NVIDIA, the leading company in the AI field, is holding the Jetson Developer Challenge (https://challengerocket.com/nvidia/). Jetson is a high-performance, low-power embedded computing platform for AI applications such as robotics and autonomous vehicles. Participants may enter individually or as a team of up to five people, and must submit AI entries built with Jetson by February 19, 2018. NVIDIA will select 10 finalists from all submissions; they will continue to the next stage of work and be invited to the GTC conference in Silicon Valley to demonstrate their projects. The 10 finalists have the chance to win prizes including a $10,000 cash award, an NVIDIA TITAN Xp flagship graphics card, an NVIDIA Jetson TX2 Developer Kit, NVIDIA Deep Learning Institute training, and more.
Entries and winners will be announced on March 29, 2018 via the website: https://contests.nvidia.com/en-us/winners.
Why do AI applications need high-performance, low-power embedded computing devices? Siri, Google Assistant, Alexa, and Cortana are not mysterious: they are backed by enormous computing power, but local computing power is not enough to meet their demands, so these voice assistants must be networked. The commands or speech you send are actually processed in the cloud. The cloud's capacity is vast, but not all AI applications suit the cloud model. Robots and autonomous vehicles face far more diverse environments and require immediate responses. For applications like speech recognition, a delay of hundreds of milliseconds or even a few seconds has no serious consequences, but for object recognition on robots and autonomous vehicles, such a delay can cost lives. This means robots and autonomous vehicles need greater local computing power. High local performance is only one side of the problem; size and power consumption must also be considered. You could of course build a large local computing cluster from 16-core CPUs such as the Intel Core i9 or AMD Ryzen Threadripper, but that means your battery might be depleted within minutes, leaving a mobile device with no power to keep running. Taken together, these factors mean that mobile AI scenarios such as robotics require high-performance, low-power embedded computing devices.
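A back-of-envelope calculation illustrates why round-trip latency matters for a vehicle. The speeds and delay figures below are illustrative assumptions, not numbers from the contest announcement:

```python
# Rough sketch: how far a vehicle travels while waiting for a response.
# All speeds and latencies below are assumed, illustrative values.

def distance_during_delay(speed_m_per_s: float, delay_s: float) -> float:
    """Distance in meters covered while waiting for an inference result."""
    return speed_m_per_s * delay_s

# A car at roughly highway speed (~30 m/s, i.e. ~108 km/h):
cloud = distance_during_delay(30.0, 0.5)    # assumed 500 ms cloud round trip
local = distance_during_delay(30.0, 0.02)   # assumed 20 ms on-board inference

print(f"cloud round trip: {cloud:.1f} m traveled")  # 15.0 m
print(f"local inference: {local:.1f} m traveled")   # 0.6 m
```

At an assumed half-second cloud round trip the car is blind for 15 meters, while on-board inference cuts that to under a meter, which is the core of the argument for local computing power.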
The Jetson platform is also widely used by scientists studying high-end AI applications; you can find examples on the preprint site arXiv. Three researchers at the University of South Dakota published their work "Real-Time Robot Localization, Vision, and Speech Recognition" using NVIDIA's Jetson TX1. The Jetson TX1 includes a quad-core ARM Cortex-A57 CPU, 4 GB of LPDDR4 memory, and a 256-core Maxwell GPU, and draws about 10 watts while delivering 1 TFLOPS of FP16 compute performance. The Jetson TX2, released earlier this year, provides twice the performance of the TX1. See the figure below for a detailed comparison:
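The efficiency implied by those figures is easy to check. The TX1 numbers (1 TFLOPS FP16 at about 10 W) come from the paragraph above; the TX2 number is derived only from the article's "twice the performance" claim, assuming a similar power envelope, and is not an official specification:

```python
# Performance-per-watt sketch based on the figures quoted in the article.
# TX1: 1 TFLOPS FP16 at ~10 W (stated above).
# TX2: assumed 2x TX1 throughput at a similar ~10 W envelope, per the
# article's doubling claim; real TX2 power modes vary.

def gflops_per_watt(tflops: float, watts: float) -> float:
    """Convert a TFLOPS rating and power draw into GFLOPS per watt."""
    return tflops * 1000.0 / watts

tx1 = gflops_per_watt(1.0, 10.0)
tx2 = gflops_per_watt(2.0, 10.0)  # under the doubling assumption

print(f"TX1: {tx1:.0f} GFLOPS/W")  # 100 GFLOPS/W
print(f"TX2: {tx2:.0f} GFLOPS/W")  # 200 GFLOPS/W
```

For comparison, a desktop many-core CPU drawing well over 100 W for a fraction of that FP16 throughput lands orders of magnitude lower on this metric, which is why the embedded form factor matters.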
If you are a student or an educator, you can purchase the Jetson TX2 at an educational discount: http://www.nvidia.cn/object/jetsontx2-edu-discount-cn.html
If you are a developer, you can purchase the Jetson TX1 Developer Kit Special Edition at a developer discount: http://www.nvidia.cn/object/jetsontx1developerkitse-cn