
Bringing Artificial Intelligence to Devices Everywhere

A conceptual image showing artificial intelligence embedded in devices of every kind.

As we recently discussed in our article Advancing the Era of Artificial Intelligence with Innovative Semiconductor Technology, the industry has reached a pivotal point in artificial intelligence (AI) innovation. Technological progress in in-memory computing and parallel processing is finally catching up with the bandwidth-intensive workloads driven by advanced deep learning, ushering in the next era of AI innovation for devices across the ecosystem. We previously explored the importance of in-memory computing and the need for high-bandwidth memory (HBM) interfaces and DRAM solutions for advanced data processing. The other important piece of the puzzle is a powerful processor, one that can provide the parallel processing needed for advanced data analysis. Through enhanced speed and performance, next-generation processors can accelerate the future of deep learning, computer vision, natural language processing and other AI applications.

Bringing AI Innovation to Personal Devices

Not only does AI have the potential to transform myriad industries, from healthcare to manufacturing to retail and beyond, it also promises to enrich our experiences with the devices around us. In Advancing the Era of Artificial Intelligence with Innovative Semiconductor Technology, we explained how server technology has fallen behind due to the massive influx of data we are experiencing, and how in-memory technology helps solve the bottleneck by increasing data indexing and transaction speeds. Similarly, the potential transformation for personal mobile devices is nearly limitless. Deep learning and AI promise an exciting range of new applications, from intelligent personal assistants, to smart speakers, to language translation and AI photo filters.

Looking beyond personal mobile devices, there is also a huge opportunity to leverage AI in the Internet of Things (IoT). Consumers have embraced personal voice assistants over the past several years, including Amazon Alexa, Apple Siri, Google Assistant, Microsoft Cortana and others, and the industry is now starting to apply personal assistant technology to the connected home, delivering orders and requests to home appliances ranging from lights to stereo systems, TVs to refrigerators. As our homes become more connected, with processing power and connectivity integrated into the things around us, we are not only generating more data than ever before but also opening up greater possibilities for harnessing that data for customized intelligence.

Samsung Advanced Memory Solutions for AI

We have established the need for advanced memory in bandwidth-intensive AI applications, and Samsung is leading the way in providing the industry with Universal Flash Storage (UFS) and Low Power DDR4X (LPDDR4X). By leveraging advanced sensing capabilities to identify and process user patterns, these memory technologies can provide even more customized personal assistance to mobile users. Leading the global mobile DRAM market, Samsung's LPDDR4X enables the next generation of ultra-slim mobile devices. With the industry's highest speeds, LPDDR4X can handle the intense requirements of AI and deep learning, supporting faster multitasking, higher capacities and lower power consumption to ultimately drive the best user experiences.
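To make the bandwidth argument concrete, here is a minimal back-of-the-envelope sketch in Python. The model size, activation traffic and frame rate are illustrative assumptions of our own, not measured Samsung figures; the point is simply how quickly a single real-time vision model turns into sustained memory traffic of the kind LPDDR4X-class bandwidth and low power are meant to absorb.

```python
# Back-of-the-envelope estimate of memory traffic for one on-device inference.
# All model numbers below are illustrative assumptions, not measured figures.

BYTES_PER_PARAM = 1            # assume 8-bit quantized weights
PARAMS = 5_000_000             # hypothetical mobile vision model, ~5M parameters
ACTIVATION_BYTES = 8_000_000   # hypothetical intermediate activations read + written
TARGET_FPS = 30                # e.g. live AI photo filtering on a camera preview

bytes_per_inference = PARAMS * BYTES_PER_PARAM + ACTIVATION_BYTES
bandwidth_gb_s = bytes_per_inference * TARGET_FPS / 1e9

print(f"Traffic per inference: {bytes_per_inference / 1e6:.1f} MB")
print(f"Sustained bandwidth at {TARGET_FPS} fps: {bandwidth_gb_s:.2f} GB/s")
```

Under these assumptions a single model already moves roughly 13 MB per frame, or about 0.4 GB/s at 30 fps, before multitasking, the camera pipeline and the operating system add their own traffic.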
Samsung's LPDDR4X comes in a slim form factor that extends its performance and speed to IoT devices, processing data across the connected home and mobile devices, and bridging the gap for smarter, more inventive possibilities across devices. Samsung's high-speed UFS advances the AI industry by providing the highest-density solution, up to 512GB, based on 64-layer V-NAND flash. With ultra-fast speeds, UFS can deliver the groundbreaking performance needed to search through multiple images simultaneously for AI photo filtering, store 4K and 8K multimedia content, and power a range of augmented reality (AR) and virtual reality (VR) devices. The promise of an IoT connected home is closer to reality with UFS memory technology collecting, storing and processing data across automotive and mobile solutions, as well as multi-lens devices like drones and action cameras.

Both LPDDR4X and UFS can store and process collected data for fast, more secure services through Samsung Exynos processors, enabling the next era of on-device AI and security solutions. By leveraging advanced sensing technology and deep learning to understand user patterns, the personalized assistants of tomorrow will be even more customized, autonomous and sophisticated.

Samsung Exynos Processor for On-Device AI

As more of our personal devices become connected and we continue to generate more data, AI and deep learning can process and analyze that data to improve our lives. However, significant technical challenges must be overcome to create an efficient deep learning environment across a variety of AI experiences in a mobile setting. Enter Samsung Exynos: a processor specifically designed for deep learning, built to deliver an AI environment anywhere, anytime. Samsung realized years ago that to efficiently meet the intensive demands of deep learning, the industry needed technology that embeds the AI model in the mobile device itself, delivering lower latency, better power efficiency and stronger security than running the model in the cloud.

For on-device AI, the Exynos 9 Series 9820 processor features a neural processing unit (NPU) that delivers deep learning processing capabilities alongside premium features, including a fourth-generation custom CPU and a faster multi-gigabit LTE modem. By handling massive amounts of data on-device at low power, the NPU significantly reduces the time required for deep learning workloads. With sophisticated image processing technologies and strong security, the Exynos 9820's AI capabilities enable a mobile device to execute advanced applications, such as accurately identifying the contents of photos for efficient search and organization, or scanning a user's face in 3D with depth-sensing technology for hybrid face detection.

Samsung Semiconductor is accelerating the digital transformation to AI by upgrading image recognition, language processing and data analysis across device types. With solutions like LPDDR4X, UFS and Exynos processors, Samsung is heralding a more efficient deep learning environment for AI experiences across mobile and IoT. As the industry continues to come together and develop technology that can handle the influx of complex, data-intensive processes that AI promises, we can unlock the potential to bring greater innovation to AI and continue to transform the way we work, live and play.
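As an illustration of the photo-identification use case described above, here is a minimal on-device classification sketch using the TensorFlow Lite Python interpreter. The model file, label list and gallery paths are hypothetical placeholders; on Exynos-class hardware the interpreter would normally be handed a hardware delegate so the NPU, rather than the CPU, executes the model.

```python
# Minimal sketch of on-device photo classification for search and organization.
# The model file and label list are hypothetical placeholders; a production
# deployment would attach a hardware delegate so the device's NPU runs the model.
import numpy as np
from PIL import Image
import tensorflow as tf

MODEL_PATH = "mobile_classifier.tflite"                   # hypothetical quantized model
LABELS = ["beach", "food", "pet", "person", "document"]   # must match the model's output classes

interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
input_info = interpreter.get_input_details()[0]
output_info = interpreter.get_output_details()[0]

def tag_photo(path: str) -> str:
    """Return the most likely tag for one photo, entirely on-device."""
    height, width = input_info["shape"][1:3]
    image = Image.open(path).convert("RGB").resize((width, height))
    batch = np.expand_dims(np.asarray(image, dtype=input_info["dtype"]), axis=0)

    interpreter.set_tensor(input_info["index"], batch)
    interpreter.invoke()                      # inference runs locally; no photo is uploaded
    scores = interpreter.get_tensor(output_info["index"])[0]
    return LABELS[int(np.argmax(scores))]

# Example: build a searchable index of the local gallery.
# index = {photo: tag_photo(photo) for photo in ["IMG_0001.jpg", "IMG_0002.jpg"]}
```

Because inference happens locally, the gallery index can be rebuilt in the background and the photos themselves never need to leave the device, which is exactly the latency, power and privacy benefit the on-device approach is aiming for.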
