Join us for the latest episode of our EE Times Current podcast, where we delve into the fascinating world of AI and connectivity at the edge. Kaushal Vora and Mo Dogar from Renesas are our special guests for this episode. Together, we discuss the crucial hardware and software components required to implement this cutting-edge technology and unravel the complex challenge of how these components fit together. Stay tuned as we explore real use cases such as computer vision, real-time analytics, and much more.
[FULL TRANSCRIPT BELOW]
Eric Singer (ES):     You are listening to EE Times On Air, and this is EE Times Current. I'm Eric Singer. This episode is sponsored by Renesas, complete semiconductor solutions to enhance the way people work and live. Today I'm joined by Kaushal Vora and Mo Dogar from Renesas to discuss AI and connectivity at the edge and endpoint. But first, today's EE Times Current highlights. 3D NAND can't change the laws of physics: with Optane mothballed and emerging memory still surfacing, the gap between 3D NAND flash and DRAM persists. Understanding the big spend on advanced packaging facilities: leading chip makers in recent years spent tens of billions of dollars on advanced chip packaging facilities. Sandbox using AI and hybrid metrology to cut costs and boost yields: the hybrid metrology tool promises to improve metrology accuracy for etch and deposition steps, ultimately reducing process technology development costs.
ES:     Find all these stories and more on eetimes.com. And if you're on this episode's webpage, there are direct links to these articles.
AI at the edge is no longer something down the road, or even leading-edge technology. Some consider it to be mainstream. But that doesn't make it any less complex. We're joined today by Kaushal Vora and Mo Dogar to discuss the hardware and software components that are required to implement AI at the edge and how those various components get pieced together. We'll also discuss some very real use cases spanning computer vision, voice, and real-time analytics, or non-visual sensing. Kaushal is senior director for business acceleration and global ecosystem at Renesas Electronics. With over 15 years of experience in the semiconductor industry, Kaushal has worked in several technology areas, including healthcare, telecom infrastructure, and solid-state lighting. At Renesas, he leads a global team responsible for defining and developing IoT solutions for the company's microcontroller and microprocessor product lines, with a focus on AI and ML, cybersecurity, functional safety, and connectivity, among other areas. Kaushal has an MSEE from the University of Southern California. Kaushal, thanks for joining us.
KV:     Pleasure is mine. Always happy to be in such good company.
ES:     And welcome to Mo Dogar, who is head of global business development and technology ecosystem for Renesas, responsible for promotion and business expansion of the complete microcontroller portfolio and other key products. He's instrumental in driving global business growth and alignment of marketing, sales, and development strategies, particularly in the fields of IoT, e-AI, security, smart embedded electronics, and communications. In addition, Mr. Dogar helps provide the vision and thought leadership behind product and solution development, smart society, and the evolving IoT economy. Mo, thanks so much for joining us.
MD:     It's great to be here. Thank you for having us.
ES:     So we are super excited to have you both on today because of, obviously, the tremendous hype around AI right now. It's everywhere. Can you demystify some of that? And if you would, I'd love to start by talking about the difference between generative AI, things like ChatGPT that almost everyone is familiar with these days, and predictive AI.
MD:     Yeah, I'll kick it off. What a great time to be in technology right now, or actually in the world we live in, right? So AI is certainly everywhere, and I would say it's actually a bit more than hype in some cases. Your question is a great one: how do we differentiate? What is the real distinction between generative AI and predictive AI? Generative AI is all about creating new content, right? It's about adding value for people in a way that saves time. On the other hand, if you look at predictive AI, it's about analyzing data and making predictions. And most of the time you're talking about those intelligent endpoint or edge devices, whether they're in our homes or factories, or wherever they happen to be, collecting data all the time. You know, we live in a world that's full of sensors. So really, there are two different types of AI, and each is creating a huge opportunity for us. When you talk about generative AI, you're talking about text or audio or video, and it's really helping to accelerate the creation of that content, and it leverages foundation models and transformers. On the other hand, edge AI is typically running on resource-constrained devices that are collecting data, and in some cases they have to make decisions in real time and give feedback to the system or the network. And literally, you can imagine billions of these endpoint devices out there actually collecting data and making decisions as well. What we are also seeing, if you look at it from a market perspective, is a huge opportunity out there. For generative AI, some of the market researchers predict a market worth around 190 billion dollars by 2030. For edge AI, it's closer to maybe 6 to 100 billion. So it's really significant. What I would also add is that edge AI is probably a bit more mature compared to generative AI, but really, the scale of acceleration and adoption is phenomenal. So I think these are exciting times to be in the world of technology and to see AI really add value to our lives and to technology at large.
KV:     Yeah, excellent points. Just to add on to what Mo said, right, the two types of AI are very different technically. Generative AI tends to run in data centers and in the cloud. It uses tera-ops of performance, and it uses gigabytes of memory and storage. These are extremely large models that have been generalized to solve more general-purpose problems, like understanding language, understanding video, understanding images, right? One of the challenges we see with generative AI, although there's a lot of hype around it because of how consumerized it has become, is, you know, can we scale generative AI sustainably? Because it's extremely power hungry, and it's extremely resource hungry. As Mo said, edge AI is definitely more mature. There are a lot of use cases on the edge that tend to be a lot more real time, that tend to be a lot more constrained, and that can leverage predictive AI. And I think a balance between both generative AI and predictive AI will eventually be where people settle a few years from now; that's just a prediction based on some of the things that we're seeing. But absolutely, as Mo mentioned, from an edge and endpoint perspective, we're seeing tremendous traction. We're seeing traction across, whether it's [?7:28] interfacing with voice, whether it's environmental sensing, you know, whet