Tune in on Wednesday at 9am PST when our CEO Jensen Huang takes the stage at our flagship GPU developer event, #GTC17.
https://www.facebook.com/NVIDIA/
http://www.gputechconf.com/
Earlier today, Nvidia announced Metropolis
NVIDIA
16 hrs ·
Today we unveiled our Metropolis edge-to-cloud platform for video analytics to make cities smarter and safer. More than 50 NVIDIA AI city partners provide applications that use deep learning on GPUs, many on display at #GTC17.
NVIDIA Paves Path to AI Cities with Metropolis Edge-to-Cloud Platform for Video Analytics
Paving the way for the creation of AI cities, NVIDIA today unveiled the NVIDIA Metropolis™ intelligent video analytics platform.
NVIDIANEWS.NVIDIA.COM
[img]http://assets.hardwarezone.com/img/2017/05/nvidia-metropolis-sample-image.jpg[/img]
[img]https://developer.nvidia.com/sites/default/files/styles/main_preview/public/akamai/blogs/images/NVIDIA-Metropolis-270x151.png?itok=TnccYd3c[/img]
http://nvidianews.nvidia.com/news/nvidia-paves-path-to-ai-cities-with-metropolis-edge-to-cloud-platform-for-video-analytics?ncid=so-fac-ms-14014
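For a feel of what "edge-to-cloud video analytics" means in practice, here's a toy sketch of the split: the edge box runs inference on each frame and ships only compact metadata upstream, not the raw video. The detector below is a stub with made-up detections (in a real deployment it'd be a GPU-accelerated DNN); the function names and record format are my own illustration, not Metropolis APIs.

```python
import json

def detect_objects(frame):
    """Stand-in for an on-device detector (e.g. a DNN running on a Jetson)."""
    # Hard-coded results for illustration; a real detector returns boxes + labels.
    return [{"label": "car", "conf": 0.91}, {"label": "person", "conf": 0.84}]

def edge_node(frame, frame_id):
    """Run inference locally and emit a small JSON record for the cloud."""
    detections = detect_objects(frame)
    return json.dumps({"frame": frame_id, "objects": detections})

# A 100-byte dummy "frame" stands in for real camera data.
record = edge_node(frame=b"\x00" * 100, frame_id=42)
print(record)
```

The point of the design: bandwidth scales with events, not with video bitrate, which is what makes city-scale camera networks feasible.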
Hmm, I wonder what George Orwell is thinking right now. God rest his soul
Other creepy AI stuff
https://news.developer.nvidia.com/artificial-intelligence-generates-christmas-song-from-holiday-image/
November 30, 2016
Researchers from the University of Toronto developed an AI system that creates, and then sings, a Christmas song by analyzing the visual components of an uploaded image.
https://www.youtube.com/watch?v=38hxfC6M-pg
_______________________________________________________________________________________________________
Recreate Any Voice Using One Minute of Sample Audio
https://news.developer.nvidia.com/recreate-any-voice-using-one-minute-of-sample-audio/
April 27, 2017
A Montreal-based startup developed a set of deep learning algorithms that can copy anyone’s voice with only 60 seconds of sample audio.
Lyrebird, a startup spun off from the MILA lab at the Université de Montréal and advised by Aaron Courville and Yoshua Bengio, claims to be the first of its kind that can copy a voice in a matter of minutes and control the emotion of the generated speech.
(head over to the sound bites at the link above; they mimic Obama and Trump)
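A very rough sketch of the idea behind this kind of voice cloning: compress a short sample into a fixed-size "speaker embedding", then condition a generator on that vector. Lyrebird's actual model isn't public, so the embedding below (mean/std of spectrogram-like frames) is just a stand-in for a learned encoder, and all names here are my own.

```python
import numpy as np

def speaker_embedding(frames):
    """frames: (n_frames, n_features) acoustic features from ~60s of audio."""
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

def similarity(e1, e2):
    """Cosine similarity: the same speaker should score close to 1."""
    return float(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2)))

rng = np.random.default_rng(1)
speaker_a = rng.normal(0.0, 1.0, size=(600, 40))            # ~60s of frames
speaker_a2 = speaker_a + rng.normal(0, 0.05, (600, 40))     # same voice, new take
speaker_b = rng.normal(2.0, 1.5, size=(600, 40))            # a different voice

ea, ea2, eb = map(speaker_embedding, (speaker_a, speaker_a2, speaker_b))
print(similarity(ea, ea2) > similarity(ea, eb))  # True
```

The same-speaker pair lands much closer in embedding space than the cross-speaker pair, which is the property a conditioned synthesizer exploits.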
______________________________________________________________________________________________________
Human-like Character Animation System Uses AI to Navigate Terrains
https://news.developer.nvidia.com/human-like-character-animation-system-uses-ai-to-navigate-terrains/
May 2, 2017
Researchers from the University of Edinburgh and Method Studios developed a real-time character control system that uses deep learning to help virtual characters walk, run and jump a little more naturally.
https://www.youtube.com/watch?v=Ul0Gilv5wvY
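The core trick in this line of work is that the network's weights aren't fixed: they're a smooth function of the gait phase, interpolated from a few stored control points with a cubic spline. Here's a toy NumPy sketch of that blending step; shapes and names are illustrative, not taken from the paper's code.

```python
import numpy as np

def catmull_rom(y0, y1, y2, y3, mu):
    """Cubic Catmull-Rom interpolation between y1 and y2 (mu in [0, 1])."""
    a = -0.5 * y0 + 1.5 * y1 - 1.5 * y2 + 0.5 * y3
    b = y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3
    c = -0.5 * y0 + 0.5 * y2
    return ((a * mu + b) * mu + c) * mu + y1

def phase_weights(control_points, phase):
    """Blend stored weight tensors into the weights for a given phase in [0, 2pi)."""
    k = len(control_points)             # e.g. 4 stored weight sets
    t = (phase / (2 * np.pi)) * k       # position on the gait cycle
    i = int(t) % k
    mu = t - int(t)
    y0, y1, y2, y3 = (control_points[(i + j - 1) % k] for j in range(4))
    return catmull_rom(y0, y1, y2, y3, mu)

rng = np.random.default_rng(0)
controls = [rng.standard_normal((3, 3)) for _ in range(4)]

# At a control point the spline passes exactly through the stored weights.
W = phase_weights(controls, phase=0.0)
print(np.allclose(W, controls[0]))  # True
```

Because the blend is cyclic and smooth, the character's motion loops without pops as the phase wraps around.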
Something very awesome that they have made possible with Jetson
Helping the Blind Navigate the World
Reading a food label. Navigating a crosswalk. Recognizing a friend. These tasks are easy for most people, but can be difficult for those who are visually impaired.
To bring more independence to the lives of people with limited sight, a new wearable device called Horus uses GPU-powered deep learning and computer vision to help them “see” by describing what its users are looking at.
https://www.youtube.com/watch?v=9TEJC5fXnu8
Saverio Murgia, Horus CEO and co-founder, was inspired to create his company two years ago after meeting a blind person on the street who asked for help finding a bus stop.
It also seems that Nvidia polished their site a little bit during the maintenance yesterday.
Like over at the old driver page, the search function has been changed.
http://www.nvidia.com/Download/index.aspx?lang=en-us
And the results in some of the links have a lil more pizzazz
https://blogs.nvidia.com/blog/tag/gtc-2017/
The technology drop down menu is no longer present on the global page. That may have changed sometime in the past, I can't say because I haven't been on the page in quite awhile.
http://www.nvidia.com/page/home.html
It sucks, because with the new banner and drop down menus, you can't navigate to the old 3D Vision requirements site like before :(
http://www.nvidia.com/object/3d-vision-main.html
Jen-Hsun Huang talked mostly about how Moore's Law no longer applies, and about deep learning and AI.
He also talked mad crazy stuff about 7 XO Flops, 20 XO Flops, 105 XO Flops...
Announced the Tesla V100 (Volta)
You can see a recording of the 2-hour event here (scroll down a little):
http://wccftech.com/watch-nvidias-gtc-2017-keynote-volta-supercomputing-9-pt-may-10th/
He also talked about project Holodeck
[img]https://edge.alluremedia.com.au/m/g/2017/05/SAM4171.gif[/img]
That and their new server, with Tesla V100 GPUs
https://devblogs.nvidia.com/parallelforall/inside-volta/
http://www.itworld.com/article/3196093/data-center/nvidias-new-volta-based-dgx-1-supercomputer-puts-400-servers-in-a-box.html
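For some back-of-envelope math on those Volta numbers: the figures below are the headline specs as reported at GTC 2017 (approximate marketing peaks, not measured throughput), and the arithmetic shows where the "hundreds of servers in a box" framing comes from.

```python
# Headline GTC 2017 specs (approximate, mixed-precision Tensor Core peak).
tensor_tflops_per_v100 = 120   # Tesla V100 peak tensor throughput
gpus_per_dgx1 = 8              # the Volta DGX-1 carries eight V100s

dgx1_tensor_tflops = tensor_tflops_per_v100 * gpus_per_dgx1
print(f"DGX-1 (Volta) peak tensor throughput: ~{dgx1_tensor_tflops} TFLOPS "
      f"(~{dgx1_tensor_tflops / 1000} PFLOPS)")
```

That's roughly a petaflop of deep-learning throughput in a single box, which is the comparison the ITworld headline is leaning on.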
Dave's nightmare is becoming a reality
https://www.youtube.com/watch?v=7qnd-hdmgfk