The NVIDIA Jetson line now spans a family of Orin-based modules, from the entry-level Jetson Orin Nano to the flagship Jetson AGX Orin, letting clients scale their projects across the range using the company's Jetson AGX Orin Developer Kit. As AI workloads increasingly demand real-time processing across many industries, NVIDIA is targeting high-performance edge computing that delivers lower latency while remaining efficient, inexpensive, and compact.

The company plans to ship Jetson Orin Nano production modules starting in January 2023, at an entry-level price of $199. The new modules deliver up to 40 TOPS of AI performance in the smallest Jetson form factor, with configurable power envelopes between 5W and 15W, and come in two memory configurations: the Jetson Orin Nano 4GB and 8GB.

The Jetson Orin Nano uses an Ampere-architecture GPU with eight streaming multiprocessors containing 1,024 CUDA cores and 32 Tensor Cores for processing artificial intelligence workloads. The Ampere Tensor Cores improve performance per watt and add support for sparsity, which can double Tensor Core throughput. The module also carries a six-core Arm Cortex-A78AE CPU, a video decode engine, an image compositor, an ISP, an audio processing engine, and a video input block.
For connectivity, the modules offer:

- Up to seven PCIe Gen3 lanes
- Three 10Gbps USB 3.2 Gen2 ports
- Eight lanes of MIPI CSI-2 camera input
- Numerous sensor inputs and outputs
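The sparsity support noted above refers to Ampere's fine-grained 2:4 structured-sparsity scheme, in which two of every four contiguous weights are zero, letting the Tensor Cores skip the zeroed operands. The following is a minimal NumPy sketch (an illustration, not NVIDIA's tooling; `prune_2_4` is a name chosen here) of how a dense weight matrix can be pruned into that pattern:

```python
import numpy as np

def prune_2_4(weights):
    """Zero the two smallest-magnitude values in every group of four,
    producing the 2:4 structured-sparsity pattern that Ampere Tensor
    Cores can exploit for up to 2x throughput."""
    w = weights.reshape(-1, 4).copy()
    # indices of the two smallest-magnitude entries in each group of four
    drop = np.argsort(np.abs(w), axis=1)[:, :2]
    np.put_along_axis(w, drop, 0.0, axis=1)
    return w.reshape(weights.shape)

w = np.array([[0.9, -0.1, 0.4, 0.05],
              [0.2, -0.8, 0.03, 0.6]])
pruned = prune_2_4(w)
print(pruned)
# each row of four keeps only its two largest-magnitude values
```

In practice this pruning is done during or after training (so accuracy can be recovered by fine-tuning), and the resulting pattern is what the hardware accelerates.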
Another improvement in the Jetson Orin Nano and Jetson Orin NX modules is that they share a form factor and are pin-compatible. Using NVIDIA JetPack, the Jetson AGX Orin Developer Kit can emulate any of the modules in the Jetson Orin series, letting developers start working in the new environment right away. As the following two charts show, the Jetson Orin Nano series was pitted against its predecessors on demanding AI inference workloads to illustrate the difference in performance and efficiency: the first table compares FPS across generations, while the second bar graph shows AI inference performance per second for the four configurations tested. The 8GB module shows roughly a thirty-fold performance increase, and NVIDIA states that it plans to build on that figure and target forty-five-fold better performance in the future. Various frameworks will be available to developers, including:
- NVIDIA Isaac (robotics)
- NVIDIA DeepStream (vision AI)
- NVIDIA Riva (conversational AI)
- NVIDIA Omniverse Replicator (synthetic data generation, or SDG)
- NVIDIA TAO Toolkit (optimizing pre-trained AI models)
Developers who want to learn more can visit the Jetson AGX Orin Developer Kit page for details and the available resources.

News Source: NVIDIA Developer blog