GTC 2017
Tune in on Wednesday at 9am PST when our CEO Jensen Huang takes the stage at our flagship GPU developer event, #GTC17.

https://www.facebook.com/NVIDIA/

http://www.gputechconf.com/



Earlier today, Nvidia announced Metropolis. From NVIDIA's Facebook post (16 hrs ago):
Today we unveiled our Metropolis edge-to-cloud platform for video analytics to make cities smarter and safer. More than 50 NVIDIA AI city partners provide applications that use deep learning on GPUs, many on display at #GTC17.

NVIDIA Paves Path to AI Cities with Metropolis Edge-to-Cloud Platform for Video Analytics
Paving the way for the creation of AI cities, NVIDIA today unveiled the NVIDIA Metropolis™ intelligent video analytics platform.


[img]http://assets.hardwarezone.com/img/2017/05/nvidia-metropolis-sample-image.jpg[/img]

[img]https://developer.nvidia.com/sites/default/files/styles/main_preview/public/akamai/blogs/images/NVIDIA-Metropolis-270x151.png?itok=TnccYd3c[/img]


http://nvidianews.nvidia.com/news/nvidia-paves-path-to-ai-cities-with-metropolis-edge-to-cloud-platform-for-video-analytics?ncid=so-fac-ms-14014
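If you're wondering what "deep learning on GPUs for video analytics" actually looks like at the code level, here's a minimal Python sketch of the general pattern (this is not Metropolis itself; the detector, stream URL, and score threshold are just placeholders): decode frames from a camera stream and run a GPU object detector over each one.

[code]
# Hypothetical sketch of GPU video analytics: detect objects in a camera stream.
# Not NVIDIA Metropolis, just the general pattern (decode -> infer on GPU -> act).
import cv2
import torch
import torchvision

# Pretrained COCO detector; any detection model would do here.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval().to("cuda")

cap = cv2.VideoCapture("rtsp://example-camera/stream")  # placeholder stream URL

with torch.no_grad():
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # BGR uint8 HxWxC -> RGB float CxHxW in [0, 1]
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        tensor = torch.from_numpy(rgb).permute(2, 0, 1).float().div(255).to("cuda")
        (result,) = model([tensor])
        for box, label, score in zip(result["boxes"], result["labels"], result["scores"]):
            if score > 0.6:  # arbitrary confidence threshold
                print(label.item(), [round(v) for v in box.tolist()], round(score.item(), 2))

cap.release()
[/code]

In a real deployment the hard parts are everything around this loop (multi-stream decode, tracking, pushing metadata to the cloud), which is the gap the Metropolis platform is aimed at.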


Hmm, I wonder what George Orwell is thinking right now. God rest his soul

#1
Posted 05/09/2017 07:40 AM   
Other creepy AI stuff


https://news.developer.nvidia.com/artificial-intelligence-generates-christmas-song-from-holiday-image/



November 30, 2016

Researchers from the University of Toronto developed an AI system that creates and then sings a Christmas song by analyzing the visual components of an uploaded image.

https://www.youtube.com/watch?v=38hxfC6M-pg
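The rough recipe in that kind of work is an image encoder feeding a recurrent text generator. Here's a toy, untrained Python sketch of that pipeline shape; the vocabulary, layer sizes, and decoding loop are all made up for illustration, and a real system would be trained on actual lyrics.

[code]
# Toy, untrained sketch of an image -> lyrics pipeline (hypothetical, for shape only).
import torch
import torch.nn as nn
import torchvision

vocab = ["<start>", "snow", "lights", "tree", "joy", "bells", "<end>"]  # made-up vocabulary

# Image encoder: a pretrained CNN with its classifier head removed.
cnn = torchvision.models.resnet18(pretrained=True)
cnn.fc = nn.Identity()            # leaves a 512-dim image feature
cnn.eval()

rnn = nn.GRU(input_size=16, hidden_size=512, batch_first=True)
embed = nn.Embedding(len(vocab), 16)
to_vocab = nn.Linear(512, len(vocab))

image = torch.rand(1, 3, 224, 224)     # stand-in for an uploaded photo
h = cnn(image).unsqueeze(0)            # image feature becomes the initial RNN state
token = torch.tensor([[0]])            # <start>

words = []
for _ in range(8):                     # greedy decoding of a few "lyric" words
    out, h = rnn(embed(token), h)
    token = to_vocab(out[:, -1]).argmax(dim=-1, keepdim=True)
    words.append(vocab[token.item()])
print(" ".join(words))
[/code]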


_______________________________________________________________________________________________________





Recreate Any Voice Using One Minute of Sample Audio


https://news.developer.nvidia.com/recreate-any-voice-using-one-minute-of-sample-audio/


April 27, 2017
A Montreal-based startup developed a set of deep learning algorithms that can copy anyone’s voice with only 60 seconds of sample audio.

Lyrebird, a startup spun off from the MILA lab at the University of Montréal and advised by Aaron Courville and Yoshua Bengio, claims to be the first of its kind, able to copy a voice in a matter of minutes and control the emotion of the generated speech.

(head over to the sound bites at the link above; they include mimics of Obama and Trump)
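Lyrebird hasn't published code, but the usual recipe described for few-shot voice copying is: squeeze the one-minute sample into a fixed speaker embedding, then condition a text-to-speech generator on that embedding. Here's a purely illustrative, untrained Python sketch of that structure; every module name and size below is my own placeholder, not Lyrebird's model.

[code]
# Hypothetical structure of speaker-embedding-based voice cloning (untrained sketch).
import torch
import torch.nn as nn

class SpeakerEncoder(nn.Module):
    """Turns ~60s of mel-spectrogram frames into one fixed-size speaker embedding."""
    def __init__(self, n_mels=80, dim=256):
        super().__init__()
        self.rnn = nn.GRU(n_mels, dim, batch_first=True)
    def forward(self, mels):                  # mels: (batch, frames, n_mels)
        _, h = self.rnn(mels)
        return h[-1]                          # (batch, dim)

class ConditionedTTS(nn.Module):
    """Generates mel frames from text tokens, conditioned on the speaker embedding."""
    def __init__(self, vocab=40, dim=256, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim * 2, dim, batch_first=True)
        self.to_mel = nn.Linear(dim, n_mels)
    def forward(self, text, speaker):         # text: (batch, chars), speaker: (batch, dim)
        e = self.embed(text)
        s = speaker.unsqueeze(1).expand(-1, e.size(1), -1)
        out, _ = self.rnn(torch.cat([e, s], dim=-1))
        return self.to_mel(out)               # (batch, chars, n_mels)

sample_mels = torch.rand(1, 6000, 80)         # ~60s of audio features (placeholder)
speaker_vec = SpeakerEncoder()(sample_mels)
mel_out = ConditionedTTS()(torch.randint(0, 40, (1, 50)), speaker_vec)
print(mel_out.shape)                          # torch.Size([1, 50, 80])
[/code]

The generated mel frames would still need a vocoder to become a waveform, and emotion control would just be another conditioning vector alongside the speaker embedding.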
______________________________________________________________________________________________________






Human-like Character Animation System Uses AI to Navigate Terrains


https://news.developer.nvidia.com/human-like-character-animation-system-uses-ai-to-navigate-terrains/


May 2, 2017
Researchers from the University of Edinburgh and Method Studios developed a real-time character control mechanism using deep learning that can help virtual characters walk, run and jump a little more naturally.

https://www.youtube.com/watch?v=Ul0Gilv5wvY
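That link is the "phase-functioned" neural network paper, where the controller's weights are blended according to the phase of the walk cycle. Below is a heavily simplified, hypothetical Python sketch of the idea: a small network mapping (current pose + desired trajectory + terrain heights + phase) to the next pose. The real paper blends whole sets of expert weights by phase; here the phase is just concatenated as an input feature, and all the sizes are placeholders.

[code]
# Simplified, hypothetical sketch of a learned character controller.
# The real paper blends expert weights by phase; here phase is just an extra input.
import math
import torch
import torch.nn as nn

N_JOINTS = 31          # rough skeleton size used in such systems (placeholder)
TRAJECTORY = 12        # sampled future trajectory points (placeholder)
TERRAIN = 12           # terrain height samples under the trajectory (placeholder)

controller = nn.Sequential(
    nn.Linear(N_JOINTS * 3 + TRAJECTORY * 2 + TERRAIN + 2, 512),
    nn.ELU(),
    nn.Linear(512, 512),
    nn.ELU(),
    nn.Linear(512, N_JOINTS * 3),   # predicted joint positions for the next frame
)

def step(pose, trajectory, terrain, phase):
    """One control step: concatenate state features and predict the next pose."""
    phase_feat = torch.tensor([math.sin(phase), math.cos(phase)])
    x = torch.cat([pose.flatten(), trajectory.flatten(), terrain, phase_feat])
    return controller(x).view(N_JOINTS, 3)

pose = torch.zeros(N_JOINTS, 3)
next_pose = step(pose, torch.zeros(TRAJECTORY, 2), torch.zeros(TERRAIN), phase=0.3)
print(next_pose.shape)   # torch.Size([31, 3])
[/code]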

#2
Posted 05/09/2017 07:58 AM   
Something very awesome that they have made possible with Jetson

Helping the Blind Navigate the World

Reading a food label. Navigating a crosswalk. Recognizing a friend. These tasks are easy for most people, but can be difficult for those who are visually impaired.

To bring more independence to the lives of people with limited sight, a new wearable device called Horus uses GPU-powered deep learning and computer vision to help them “see” by describing what its users are looking at.

https://www.youtube.com/watch?v=9TEJC5fXnu8
Saverio Murgia, Horus CEO and co-founder, was inspired to create his company two years ago after meeting a blind person on the street who asked for help finding a bus stop.
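Under the hood a device like this boils down to scene understanding on an embedded GPU followed by text-to-speech. As a rough, hypothetical illustration of the last step (turning detections into a spoken sentence), here's a small Python sketch; the detection list is faked, and pyttsx3 stands in for whatever speech engine the real device uses.

[code]
# Hypothetical last stage of a wearable scene-describer: detections -> spoken sentence.
import pyttsx3

def describe(detections):
    """Turn a list of (label, bearing) pairs into a short natural-language description."""
    if not detections:
        return "Nothing recognized ahead."
    parts = [f"a {label} {bearing}" for label, bearing in detections]
    return "I can see " + ", ".join(parts) + "."

# Faked detector output; on the real device this would come from the camera + GPU model.
detections = [("crosswalk", "straight ahead"), ("person", "on your left")]
sentence = describe(detections)

engine = pyttsx3.init()     # offline text-to-speech
engine.say(sentence)
engine.runAndWait()
[/code]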

#3
Posted 05/09/2017 08:17 AM   
It also seems that Nvidia polished their site a little bit during the maintenance yesterday.

Like over at the old driver page, the search function has been changed.

http://www.nvidia.com/Download/index.aspx?lang=en-us

And the results at some of the links have a little more pizzazz:

https://blogs.nvidia.com/blog/tag/gtc-2017/

The technology drop-down menu is no longer present on the global page. That may have changed sometime in the past; I can't say, because I haven't been on the page in quite a while.

http://www.nvidia.com/page/home.html

It sucks, because with the new banner and drop-down menus, you can't navigate to the old 3D Vision requirements site like before :(

http://www.nvidia.com/object/3d-vision-main.html

#4
Posted 05/09/2017 08:34 AM   
Jen-Hsun Huang talked mostly about how Moore's Law no longer applies, deep learning, and AI.

He also threw out some mad crazy numbers: 7 XO Flops, 20 XO Flops, 105 XO Flops...

He announced the Tesla V100 (Volta).

You can see a recording of the 2-hour event here (scroll down a bit):

http://wccftech.com/watch-nvidias-gtc-2017-keynote-volta-supercomputing-9-pt-may-10th/

He also talked about Project Holodeck:

[img]https://edge.alluremedia.com.au/m/g/2017/05/SAM4171.gif[/img]

#5
Posted 05/10/2017 06:49 PM   
NVIDIA stock jumped by almost 18% today, so the financial market likes something NVIDIA announced (probably Metropolis)...

#6
Posted 05/10/2017 09:19 PM   
That and their new server, with Tesla V100 GPUs

https://devblogs.nvidia.com/parallelforall/inside-volta/

http://www.itworld.com/article/3196093/data-center/nvidias-new-volta-based-dgx-1-supercomputer-puts-400-servers-in-a-box.html

Dave's nightmare is becoming a reality

https://www.youtube.com/watch?v=7qnd-hdmgfk
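The "Inside Volta" post is mostly about the new tensor cores, which accelerate half-precision matrix multiplies. A quick way to sanity-check what your own card does on that kind of workload is to time a big FP16 matmul and work out the achieved TFLOP/s in Python; the matrix size and trial count below are arbitrary, and on Volta these matmuls can be routed through the tensor cores by the libraries.

[code]
# Rough FP16 matmul throughput check; on Volta this path can hit the tensor cores.
import time
import torch

n, trials = 4096, 50
a = torch.randn(n, n, device="cuda", dtype=torch.float16)
b = torch.randn(n, n, device="cuda", dtype=torch.float16)

torch.matmul(a, b)               # warm-up so lazy initialization isn't timed
torch.cuda.synchronize()

start = time.time()
for _ in range(trials):
    torch.matmul(a, b)
torch.cuda.synchronize()
elapsed = time.time() - start

flops = 2 * n**3 * trials        # each matmul is ~2*n^3 floating-point operations
print(f"~{flops / elapsed / 1e12:.1f} TFLOP/s at FP16")
[/code]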

#7
Posted 05/10/2017 10:05 PM   
https://www.youtube.com/watch?v=54TK9xaNxDs

#8
Posted 05/17/2017 09:09 AM   