Manuel Scarborough 2018-03-27

Nvidia announced today that it has launched a number of efforts to speed deep learning inferencing, the stage at which a trained neural network is applied to new data, and one that is critical for artificial intelligence applications.

Some of its advances will be able to cut data center costs by up to 70 percent, and its graphics processing units (GPUs) will be able to perform deep learning inferencing up to 190 times faster than central processing units (CPUs).

In the past five years, programmers have made huge advances in AI, first by training deep learning neural networks on existing data and then by deploying those networks to draw inferences from new data.

Nvidia’s efforts are aimed at improving inferencing while slashing the cost of deep learning-powered services, said Jensen Huang, CEO of Nvidia, in a keynote speech at the GTC event in San Jose, California.

Thanks to these improvements, tech companies are making strides in speech recognition, natural language processing, recommendation systems, and image recognition.

“We are experiencing a meteoric rise in GPU accelerated computing,” said Ian Buck, vice president and general manager of accelerated computing at Nvidia, in a press event.

Michael Hurlock 2019-03-06

The world’s most popular open source framework for machine learning is getting a major upgrade today with the alpha release of TensorFlow 2.0.

Created by the Google Brain team, the framework is used by developers, researchers, and businesses to train and deploy machine learning models that make inferences about data.

A full release is scheduled to take place in Q2 2019.

The news was announced today at the TensorFlow Dev Summit being held at the Google Event Center in Sunnyvale, California.

Since the launch of TensorFlow in November 2015, the framework has been downloaded over 41 million times and now has over 1,800 contributors from around the world, said TensorFlow engineering director Rajat Monga.

A number of APIs seen as redundant — such as the Slim and Layers APIs — will be eliminated.

Thomas Owens 2016-09-29

Video recognition could give robots the equivalent of a human eye and allow them to do mundane tasks like laundry.

Computers can already recognize you in an image, but can they see a video or real-world objects and tell exactly what's going on?

Researchers in and outside of Google are making progress in video recognition, but there are also challenges to overcome, Rajat Monga, engineering director of TensorFlow for Google's Brain team, said during a question-and-answer session on Quora this week.

For example, a computer will be able to identify a person's activities, an event, or a location.

Video recognition is akin to human vision, where we see a stream of related images, recognize objects immediately, and identify what's going on around us.

Many gains in video recognition have come thanks to advances in the deep-learning models driving image recognition.
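That framing suggests a simple baseline: run an image classifier on every frame and aggregate the per-frame predictions over time. Below is a minimal sketch of the idea in plain Python, where `classify_frame` is a hypothetical stand-in for a real image model, not any Google API:

```python
from collections import Counter

def classify_frame(frame):
    """Hypothetical per-frame image classifier stub.
    A real system would run a deep network here; for illustration,
    each 'frame' is already a label string."""
    return frame

def classify_video(frames):
    """Label a video by majority vote over per-frame predictions,
    returning the winning label and its share of the frames."""
    votes = Counter(classify_frame(f) for f in frames)
    label, count = votes.most_common(1)[0]
    return label, count / len(frames)

# A noisy clip: most frames look like "folding laundry".
clip = ["folding laundry"] * 7 + ["waving"] * 2 + ["folding laundry"]
print(classify_video(clip))  # ('folding laundry', 0.8)
```

Real video models go further, using temporal context rather than treating frames independently, which is part of why the problem is harder than image recognition.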

Everett Toliver 2016-08-18

Earlier this month the company announced that it was tweaking its algorithms to cut down on clickbait — the ubiquitous plague of Internet content that over-promises and under-delivers.

The company has released AI algorithms, a tool for spotting bugs in code, and designs for AI-optimized hardware.

But given Amazon's core business, it's not surprising that the online retailer's version is devoted to selling merchandise.

TensorFlow technical lead Rajat Monga explains that the delay in releasing a multi-server version of TensorFlow was due to the difficulties of adapting the software to be usable outside of Google's highly customized data centers.

“Our software stack is different internally from what people use externally,” he says.

The TensorFlow team opted to release a more limited version last year just to get something into researchers’ hands while continuing to work on more advanced features.

George Starling 2019-09-30

Google's open source machine learning library TensorFlow 2.0 is now available for public use, the company announced today.

The alpha version of TensorFlow 2.0 was first made available this spring at the TensorFlow Dev Summit alongside TensorFlow Lite 1.0 for mobile and embedded devices, and other ML tools like TensorFlow Federated.

TensorFlow 2.0 comes with a number of changes intended to improve ease of use, such as the elimination of some APIs thought to be redundant and tight integration with tf.keras as its central high-level API.

Initial integration with the Keras deep learning library began with the release of TensorFlow 1.0 in February 2017.

It also promises three times faster training performance when using mixed precision on Nvidia’s Volta and Turing GPUs, and with eager execution enabled by default, the latest version of TensorFlow delivers runtime improvements as well.
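Mixed precision gets its speed by doing the bulk of the arithmetic in 16-bit floats while keeping a full-precision master copy of the weights for updates. The toy sketch below (plain Python, using the `struct` module's half-float format to emulate float16 rounding; a conceptual illustration, not TensorFlow's actual implementation) shows why the master copy matters:

```python
import struct

def to_half(x):
    """Round a Python float through IEEE 754 half precision (float16)."""
    return struct.unpack('e', struct.pack('e', x))[0]

w = 1.0              # full-precision "master" weight
lr, grad = 1e-4, 0.5

# Naive all-half-precision update: the tiny increment vanishes,
# because 1.0 - 5e-5 rounds back to 1.0 in float16.
w_half = to_half(to_half(w) - to_half(lr * grad))
print(w_half)        # 1.0 -- the update was lost

# Mixed precision: the gradient may arrive in float16, but the master
# weight is updated in full precision, so small steps accumulate.
w_master = w - lr * to_half(grad)
print(w_master)      # 0.99995
```

This is also why mixed-precision training typically pairs half-precision arithmetic with loss scaling and float32 accumulators.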

The TensorFlow framework has been downloaded more than 40 million times since it was released by the Google Brain team in 2015, TensorFlow engineering director Rajat Monga told VentureBeat earlier this year.

David Bierman 2019-03-06

Google today introduced TensorFlow Lite 1.0, its framework for developers deploying AI models on mobile and IoT devices.

Quantization has compressed some models by a factor of four.
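That compression ratio falls straight out of the storage math: quantizing 32-bit floats to 8-bit integers cuts four bytes per weight down to one. A minimal affine-quantization sketch in plain Python (toy values; not TensorFlow Lite's actual implementation):

```python
def quantize(values, num_bits=8):
    """Affine-quantize floats to unsigned integers of num_bits."""
    lo, hi = min(values), max(values)
    levels = 2 ** num_bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = [round((v - lo) / scale) for v in values]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Map quantized integers back to approximate floats."""
    return [x * scale + lo for x in q]

weights = [-1.5, -0.2, 0.0, 0.7, 1.5]   # toy float32 weights
q, scale, lo = quantize(weights)

float32_bytes = 4 * len(weights)        # 4 bytes per float32 value
int8_bytes = 1 * len(weights)           # 1 byte per int8 value
print(float32_bytes // int8_bytes)      # 4 -- the "factor of four"

# Round-trip error stays below one quantization step
recovered = dequantize(q, scale, lo)
print(max(abs(a - b) for a, b in zip(weights, recovered)) < scale)  # True
```

The trade-off is the small rounding error shown above, which is why quantization is usually validated against the model's accuracy on held-out data.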

“We are going to fully support it.”

Lite was first introduced at the I/O developer conference in May 2017 and in developer preview later that year.

The TensorFlow Lite team at Google also shared its roadmap today, aimed at shrinking and speeding up AI models for edge deployment. It includes model acceleration, especially for Android developers using neural nets, as well as a Keras-based pruning kit and additional quantization enhancements.
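Pruning, like quantization, shrinks models: it zeroes out low-magnitude weights so the network can be stored and executed sparsely. A toy magnitude-pruning sketch in plain Python (illustrative only; not the Keras/TensorFlow pruning API):

```python
def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights.
    Toy sketch of magnitude-based pruning."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    cutoff = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= cutoff else w for w in weights]

weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.03, 0.2, -0.08]
pruned = magnitude_prune(weights, sparsity=0.5)
print(pruned)  # [0.9, 0.0, 0.4, 0.0, -0.7, 0.0, 0.2, 0.0]
```

In practice pruning is applied gradually during training so the network can recover accuracy as weights are removed.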

Other changes on the way:

Howard Marsh 2019-01-31

Google and Facebook have boasted of experiments using billions of photos and thousands of high-powered processors.

The record-setting project involved the world’s most powerful supercomputer, Summit, at Oak Ridge National Lab.

The machine captured that crown in June last year, reclaiming the title for the US after five years of China topping the list.

The project tapped the machine’s power to train deep-learning algorithms, the technology driving AI’s frontier, chewing through the exercise at a rate of a billion billion operations per second, a pace known in supercomputing circles as an exaflop.
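For scale, an exaflop is 10^18 operations per second. With purely illustrative figures (not the project's actual numbers), a back-of-the-envelope calculation shows what that rate buys:

```python
EXAFLOP = 10 ** 18   # one billion billion operations per second

# Hypothetical workload: each training step costs 10**15 operations,
# and training runs for one million steps.
ops_per_step = 10 ** 15
steps = 1_000_000
total_ops = ops_per_step * steps   # 10**21 operations overall

seconds = total_ops / EXAFLOP
print(seconds)       # 1000.0 -- under 17 minutes at a sustained exaflop
print(seconds / 60)  # ~16.7 minutes
```

At that pace, workloads that would occupy an ordinary cluster for weeks finish in minutes, which is what makes the scaling result notable.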

“Deep learning has never been scaled to such levels of performance before,” says Prabhat, who leads a research group at the National Energy Research Scientific Computing Center at Lawrence Berkeley National Lab.

Tech companies train algorithms to recognize faces or road signs; the government scientists trained theirs to detect weather patterns like cyclones in the copious output from climate simulations that spool out a century's worth of three-hour forecasts for Earth’s atmosphere.
