How We Made That App
Welcome to “How We Made That App,” where we explore the crazy, wild, and sometimes downright bizarre stories behind the creation of some of the world’s most popular apps, hosted by the always charming and devastatingly handsome Madhukar Kumar. After starting his career as a developer and then moving into product management, he is now the Chief Marketing Officer at SingleStore. And he’s here to take you on a journey through the data, challenges, and obstacles that app developers face on the road to creating their masterpieces. In each episode, we’ll dive deep into the origins of a different app and find out what went into making it the success it is today. We’ll explore the highs and lows of development, the technical challenges that had to be overcome, and the personalities and egos that clashed along the way. With a signature blend of irreverent humor, snarky commentary, and razor-sharp wit, we’ll keep you entertained and informed as we explore the cutting edge of app development. So grab your favorite coding language, crank up the volume, and join us for “How We Made That App,” brought to you by the top app-building platform wizards at SingleStore.
Episodes
14 hours ago
In this episode of How We Made That App, host Madhukar Kumar, CMO of SingleStore, sits down with Igor Jablokov, the pioneering mind behind Amazon Alexa and CEO of Pryon. Igor shares his journey from the early days of voice recognition technology to leading innovations in artificial intelligence. Discover the story behind Alexa’s creation, the technical challenges and breakthroughs in building a voice-powered assistant, and the forward-thinking solutions that reshaped how we interact with technology. Igor also delves into the ethical considerations and governance of AI, tackling common misconceptions and offering a glimpse into the future of intelligent systems. Tune in to hear how a vision for voice technology evolved into a transformative force in AI.
Key Takeaways:
• Igor Jablokov’s journey from Alexa co-creator to CEO of Pryon
• The inception and development of Amazon Alexa
• Technical insights into overcoming challenges in voice recognition
• Ethical considerations and governance in the field of AI
• Future perspectives on the role of AI in everyday life
Subscribe now and don’t miss an episode of How We Made That App, where we explore the stories behind the most impactful apps and the innovators who make them happen.
Links
Connect with Igor
Visit Pryon
Connect with Madhukar
Visit SingleStore
Monday Aug 19, 2024
In this episode of "How We Made That App," host Madhukar Kumar, CMO of SingleStore, sits down with Marc Locchi, the CTO of Flowd, a groundbreaking water management company from Australia. Marc shares his journey from a frustrated programmer to leading the development of Flowd, an app that’s transforming how businesses detect and manage water leaks in real-time. Discover the story behind Flowd’s creation, the technical challenges faced, and the innovative solutions employed, including the use of Laravel and AWS for scalability. Marc also delves into the impact Flowd has had on saving water and reducing costs for clients, sharing fascinating success stories along the way. Tune in to learn how a chance meeting and a passion for technology led to a solution that’s making waves in water conservation.
Key Takeaways:
• Marc Locchi's journey to becoming CTO of Flowd
• The inception and development of Flowd
• Technical insights on building a scalable application with Laravel
• Real-world impact stories of Flowd’s water management solutions
• Collaboration with AWS to enhance performance and scalability
Subscribe now and don’t miss an episode of "How We Made That App," where we explore the stories behind the most innovative apps and the people who make them happen.
Links
Connect with Marc
Visit Flowd
Connect with Madhukar
Visit SingleStore
Tuesday Apr 30, 2024
Join us on this intriguing journey where host Madhukar Kumar uncovers the story of FlowiseAI, an AI-powered chatbot tool that soared to fame in the open-source community. Henry Heng, the Founder of FlowiseAI, shares how FlowiseAI was born out of the need to streamline repetitive onboarding queries. Listen in as Henry recounts the unexpected explosion of interest following its open-sourcing and how community engagement, spearheaded by creators like Leon, has been pivotal to its growth. The conversation takes a fascinating turn with the discussion of Flowise’s versatility, extending to AWS and SingleStore’s creative uses for product descriptions, painting a vivid picture of the tool’s expansive potential.
Madhukar and Henry discuss the dynamic realm of data platforms, touching on the integration of large language models into developer workflows and the inevitable balance between commercial giants and open-source alternatives. Henry brings a personal perspective to the table, detailing his use of Flowise for managing property documentation and crafting an accompanying chatbot. Henry also addresses the critical issue of data privacy in enterprise environments, exploring how Flowise approaches these challenges. The strategy behind monetizing Flowise is also revealed, hinting at an upcoming cloud-hosted iteration and its future under the Y Combinator umbrella. Don’t miss out on this insightful conversation on how FlowiseAI is revolutionizing GenAI!
Key Quotes:
“What I’ve experienced is that first you go through the architect. So the architects of companies, and the senior teams as well, will decide what architecture we want to go with. And usually, I was part of the conversation as well. We tend to decide between NoSQL or SQL depending on the use cases. For schemas that are fast-changing or inconsistent, not like tabular structured data, we often use NoSQL or MongoDB. And for structured data, we used MySQL at my previous company. That’s how we decide, based on the use cases.”
“Judging from the interactions that I have with the community, I would say 80 percent of them are using OpenAI, and open source is definitely catching up but is still lagging behind OpenAI. But I do see the trend starting to pick up, especially now that you have Mixtral, and you have Llama 2 as well. But the problem is that I think the cost is still the major factor. People tend to go to whichever large language model has the lowest cost, right?”
Timestamps:
(00:00) Building FlowiseAI to open source
(5:07) Innovative use cases of Flowise
(10:15) Types of users of Flowise
(19:39) Database architecture and future technology
(32:30) Quick hits with Henry
Links
Connect with Henry
Visit FlowiseAI
Connect with Madhukar
Visit SingleStore
Tuesday Apr 16, 2024
On this episode of How We Made That App, join host Madhukar Kumar as he delves into the groundbreaking realm of AI in education with Dev Aditya, CEO and Co-Founder of the Otermans Institute. Discover the evolution from traditional teaching methods to the emergence of AI avatar educators, ushering in a new era of learning.
Dev explores how pandemic-induced innovation spurred the development of AI models, revolutionizing the educational landscape. These digital teachers aren't just transforming classrooms and corporate training. They're also reshaping refugee education in collaboration with organizations like UNICEF.
Dev takes a deep dive into the creation and refinement of culturally aware and pedagogically effective AI. He shares insights into the meticulous process behind AI model development, from the MVP's inception with 13,000 lines of Q&A to developing a robust seven-billion-parameter model, enriched by proprietary data from thousands of learners.
We also discuss the broader implications of AI in data platforms and consumer businesses. Dev shares his journey from law to AI research, highlighting the importance of adaptability and logical thinking in this rapidly evolving field. Join us for an insightful conversation bridging the gap between inspiration and innovation in educational AI!
Key Quotes:
“People like web only and app only, right? They like it. But in about July this year, we are launching alpha versions of our products as Edge AI. Now that's going to be a very narrowed-down version of the language models that we are working on right now, taking from these existing stacks. So that's going to be about 99 percent our stuff. And it's going to be running on people's devices. It's going to help with people's privacy. Your data stays in your device. And even as a business, it actually helps a lot, because I am hopefully going to see a positive difference in our costs, because a lot of that cloud cost now rests in your device.”
“My way of dealing with AI is narrow intelligence: break a problem down into as many narrow points as possible, storyboard it, as micro as possible. If you can break that down, you can teach each agent and each model to do that phenomenally well. And then it's just an integration game. It could do better than a human being as, say, the full director of a movie, if you, from the business logic standpoint, understand what a director does. It is possible, theoretically. I don't think people go deep enough to understand what a teacher does, or what a doctor does beyond being a surgeon, right? How are they thinking, what is their mechanism? If you can break that down, you can easily say, probably there are 46 things, I'm just saying, 46 things that a doctor does, right? If you have 46 agents working together, each one knowing one of those, it'd be amazing. That's a different game. I think agents are coming.”
Timestamps:
(00:00) - AI Avatar Teachers in Education
(09:29) - AI Teaching Model Development Challenges
(13:27) - Model Fine-Tuning for Knowledge Augmentation
(25:22) - Evolution of Data Platforms and AI
(32:15) - Technology Trends in Consumer Business
Links
Connect with Dev
Visit the Otermans Institute
Connect with Madhukar
Visit SingleStore
Tuesday Apr 02, 2024
In this episode of How We Made That App, host Madhukar welcomes Jack Ellis, CTO and co-founder of Fathom Analytics, who shares the inside scoop on how their platform is revolutionizing the world of web analytics by putting user privacy at the forefront. With a privacy-first ethos that discards personal data like IP addresses after processing, Fathom offers real-time analytics while ensuring user privacy, breaking away from traditional cookie-based tools like Google Analytics. Jack unpacks the technical challenges they faced in building a robust, privacy-centric analytics service, and he explains their commitment to privacy as a fundamental service feature rather than just a marketing strategy.
Jack dives into the fascinating world of web development and software engineering practices, reflecting on Fathom's journey with MySQL and PHP and detailing the trials and tribulations of scaling in high-traffic scenarios. He contrasts the robustness of PHP and the rising popularity of frameworks like Laravel with the allure of Next.js among the younger developer community. Jack also explores the evolution from monolithic applications to serverless architecture and the implications for performance and scaling, particularly when efficiently serving millions of data points.
Jack touches on the convergence of AI with database technology and its promising applications in healthcare, such as enhancing user insights and decision-making. Jack shares intriguing thoughts on how AI can transform societal betterment, drawing examples from SingleStore's work with Thorn. You don’t want to miss this revolutionizing episode on how the world of analytics is changing!
Key Quotes:
“When we started selling analytics, people were a bit hesitant to pay for analytics, but over time people have started valuing privacy over everything. And so it's just compounded from there as people have become more aware of the issues. Some people absolutely still will only use Google Analytics, but the segment of the market that is moving towards using solutions like us is growing.”
“People became used to Google's opaque ways of processing data. They weren't sure what data was being stored, how long they were keeping the IP address for, all of these other personal things as well. And we came along and we basically said, we're not interested in tracking person A around multiple different websites. We're actually only interested in person A's experience on one website. We do not, under any circumstances, want to have a way to be able to profile an individual IP address across multiple entities. And so we invented this mechanism where the web traffic would come in and we'd process it and we'd work out whether they're unique and whatever else. And then we would discard the personal data.”
“The bottleneck for most applications is not your web framework, it's always your database. I ran through Wikipedia's numbers, Facebook's numbers, and I said it doesn't matter, we can add compute, that's easy peasy. It's always the database, every single time. So stop worrying about what framework you're using and pick the right database that has proven that it can actually scale.”
“If you're using an exclusively OLTP database, you might think you're fine. But when you're trying to make mass modifications, mass deletions, mass moving of data, OLTP databases seem to fall over. I had RDS side by side with SingleStore, the same cost for both of them, and I was showing people how quickly SingleStore can do stuff. That makes a huge difference, and it gives you confidence, and I think that you need a database that's going to be able to do that.”
Timestamps:
(00:55) Valuing consumers’ privacy
(06:01) Creating Fathom Analytics' architecture
(20:48) Compounding growth to scale
(23:08) Structuring team functions
(25:39) Developing features and product design
(38:42) Advice for building applications
Links
Connect with Jack
Visit Fathom Analytics
Connect with Madhukar
Visit SingleStore
Tuesday Mar 19, 2024
On this episode of How We Made That App, host Madhukar Kumar welcomes Co-Founder and CEO of LlamaIndex, Jerry Liu! Jerry takes us from the humble beginnings of GPT Index to the impactful rise of LlamaIndex, a game-changer in the data frameworks landscape. Prepare to be enthralled by how LlamaIndex is spearheading retrieval augmented generation (RAG) technology, setting a new paradigm for developers to harness private data sources in crafting groundbreaking applications. Moreover, the adoption of LlamaIndex by leading companies underscores its pivotal role in reshaping the AI industry.
Through the rapidly evolving world of language model providers, discover the agility of model-agnostic platforms that cater to the ever-changing landscape of AI applications. As Jerry illuminates, the shift from GPT-4 to Claude 3 Opus signifies a broader trend towards efficiency and adaptability. Jerry explores the transformation of data processing, from vector databases to the advent of "live RAG" systems heralding a new era of real-time, user-facing applications that seamlessly integrate freshly assimilated information. This is a testament to how LlamaIndex is at the forefront of AI's evolution, offering a powerful suite of tools that revolutionize data interaction.
Concluding our exploration, we turn to the orchestration of agents within AI frameworks, a domain teeming with complexity yet brimming with potential. Jerry delves into the multifaceted roles of agents, bridging simple LLM reasoning tasks with sophisticated query decomposition and stateful executions. We reflect on the future of software engineering as agent-oriented architectures redefine the sector and invite our community to contribute to the flourishing open-source initiative. Join the ranks of data enthusiasts and PDF parsing experts who are collectively sculpting the next chapter of AI interaction!
Key Quotes:
“If you're a fine-tuning API, you either have to cater to the ML researcher or the AI engineer. And to be honest, most AI engineers are not going to care about fine-tuning if they can just hack together some system initially that kind of works. And so I think for more AI engineers to do fine-tuning, it either has to be such a simple UX that it's basically just brainless, you might as well just do it, and the cost and latency have to come down. And then also there has to be guaranteed metrics improvements. Right now it's just unclear. You'd have to take your data set, format it, and then actually send it to the LLM and then hope that actually improves the metrics in some way. And I think that whole process could probably use some improvement right now.”
“We realized the open source will always be an unopinionated toolkit that anybody can go and use to build their own applications. But what we really want with the cloud offering is something a bit more managed, where if you're an enterprise developer, we want to help solve that clean data problem for you so that you're able to easily load in your different data sources and connect it to a vector store of your choice. And then we can help make decisions for you so that you don't have to own and maintain that, and you can continue to write your application logic. So, LlamaCloud as it stands is basically a managed parsing and ingestion platform that focuses on getting users clean data to build performant RAG and LLM applications.”
“You have LLMs that do decision-making and tool calling, and typically, if you just take a look at a standard agent implementation, it's some sort of query decomposition plus tool use. And then you make a loop, so you run it multiple times, and by running it multiple times, that also means that you need to make this overall thing stateful, as opposed to stateless, so you have some way of tracking state throughout this whole execution run. And this includes, like, conversation memory, this includes just using a dictionary, but basically some way of tracking state. And then you complete execution, right? And then you get back a response. And so that actually is a roughly general interface that we have a base abstraction for.”
“A lot of LLMs, more and more of them, are supporting function calling nowadays. So under the hood, the LLM API gives you the ability to just specify a set of tools that the LLM can decide to call for you. So it's actually just a really nice abstraction: instead of the user having to manually prompt the LLM to coerce it, a lot of these LLM providers just have the ability for you to specify functions under the hood, and if you just do a while loop over that, that's basically an agent, right? Because you just do a while loop until that function calling process is done, and that's basically, honestly, what the OpenAI Assistants agent is. And then if you go into some of the more recent agent papers, you can start doing things beyond just the next-step chain of thought: instead of just reasoning about what you're going to do next, reason about an entire map of what you're going to do, roll out different scenarios, get the value functions of each of them, and then make the best decision. And so you can get pretty complicated with the actual reasoning process, which then feeds into tool use and everything else.”
Timestamps:
(1:25) LlamaIndex origins
(5:45) Building LLM Applications with LlamaIndex
(10:35) Finding patterns and fine-tuning in LLM usage
(18:50) Keeping LlamaIndex in the open-source community
(23:46) LlamaCloud comprehensive evaluation capabilities
(31:45) The future of the modern data stack
(40:10) Best practices when building a new application
Links
Connect with Jerry
Visit LlamaIndex
Connect with Madhukar
Visit SingleStore
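Jerry's "while loop over function calling" description of an agent can be sketched in a few lines. The decision function below is a hard-coded stand-in for the LLM, and the tool names and state keys are invented for illustration; a real agent would call an actual LLM API at that point and let it pick the tool.

```python
# A minimal sketch of the agent pattern Jerry describes: loop, let the
# "LLM" pick a tool or finish, run the tool, track results in a state
# dictionary, repeat until execution completes.

def decide_next_step(state):
    """Stand-in for the LLM: returns either a tool call or a final answer."""
    if "docs" not in state:
        return ("call_tool", "search_docs", "query: vector stores")
    if "summary" not in state:
        return ("call_tool", "summarize", state["docs"])
    return ("finish", None, f"Answer based on: {state['summary']}")

# Toy tools; a real system would register retrieval, calculators, APIs, etc.
TOOLS = {
    "search_docs": lambda arg: "retrieved passages about vector stores",
    "summarize": lambda arg: f"summary of ({arg})",
}

def run_agent():
    state = {}  # stateful execution: carries tool results across iterations
    while True:  # the "while loop until function calling is done"
        action, tool, payload = decide_next_step(state)
        if action == "finish":
            return payload
        # Execute the chosen tool and record its result in the state
        result = TOOLS[tool](payload)
        state["docs" if tool == "search_docs" else "summary"] = result

print(run_agent())
```

The loop itself is the whole abstraction: swap the stub for a model that supports function calling and the structure stays the same.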
Tuesday Feb 20, 2024
In this engaging episode, host Madhukar Kumar dives deep into the world of data architecture, deployment processes, machine learning, and AI with special guest Premal Shah, the Co-Founder and Head of Engineering at 6sense. Join them as Premal traces the technological evolution of 6sense, from the early use of FTP to the current focus on streamlining features like GitHub Copilot and enhancing customer interactions with GenAI.
Discover the journey through the adoption of Hive and Spark for big data processing, the implementation of microservice architecture, and massive-scale containerization. Learn about the team's cutting-edge projects and how they prioritize product development based on data value considerations.
Premal also shares valuable advice for budding engineers looking to enter the field. Whether you're a tech enthusiast or an aspiring engineer, this episode provides fascinating insights into the ever-evolving landscape of technology!
Key Quotes:
“What is important for our customers is that 6sense gives them the right insight and gives them the insight very quickly. So we have a lot of different products where people come in and they infer the data from what we're showing. Now it is our responsibility to help them do that faster. So now we are bringing in GenAI to give them the right summary, to help them ask questions of the data right from within the product without having to think about it more, or open a support ticket, or ask their CSM.”
“We had to basically build a platform that would get all of our customers' data on a daily or hourly basis, process it every day, and give them insights on top of it. We had some experience with Hadoop and Hive at that time, so we used that as our big data platform, and then we used MySQL as our metadata layer to store things like who is the customer, what products are there, who are the users, et cetera. So there was a clear separation of small data and big data.”
“Pretty soon we realized that the world is moving to microservices; we needed to make it easy for our developers to build and deploy stuff in the microservice environment. So we started investing in containerization and figuring out how we could deploy it, and at that same time Kubernetes was coming in, so using Docker and Kubernetes we were able to blow up our monolith into microservices, and a lot of them. Now each team is responsible for their own service and scaling and managing and building and deploying the service. So the confluence of technologies and what you can foresee as being a challenge has really helped in making the transition to microservices.”
“We brought in SingleStore to say, ‘let's just move all of our UIs to one data lake and everybody gets a consistent view.’ There's only one copy. So we process everything on our Hive and Spark ecosystem, and then we take the subset of the processed data, move it to SingleStore, and that's the customer's access point.”
“We generally coordinate our releases around a particular time of the month, especially for the big features; things go behind feature flags, so not every customer immediately gets it. Some things go in beta, some things go direct to production. So there are different phases for different features. Then we have test environments set up, so we can simulate as much as possible for the different integrations. Somebody has Salesforce, somebody has Marketo, Eloqua, HubSpot. All those environments can be tested.”
“A full-stack person is pretty important these days. You should be able to understand the concepts of data and storage, at least the basics: have a backing database to build an application on top of, be able to write some backend APIs and backend code, and then build a decent-looking UI on top of it. That actually gives you an idea of what is involved end to end in building an application, versus being focused on ‘I only do X versus Y.’ You need the versatility. A lot of employers are looking for that.”
Timestamps:
(00:23) Premal's Background and Journey into Engineering
(06:37) Introduction to 6sense: The Company and Its Mission
(09:15) The Evolution of 6sense: From Idea to Reality
(13:07) The Technical Aspects: Data Management and Infrastructure
(18:03) Shifting to a micro-service-focused world
(31:16) Challenges of Data Management and Scaling
(38:26) Deployment Strategies in Large-Scale Systems
(47:49) The Impact of Generative AI on Development and Deployment
(55:18) The Future of AI in Engineering
(01:01:07) Quick Hits
Links
Connect with Premal
Visit 6sense
Connect with Madhukar
Visit SingleStore
Tuesday Feb 06, 2024
On this episode of How We Made That App, embark on a captivating journey into STEM education with host Madhukar Kumar and Alex Lee, Co-Founder of the brilliant app Numerade! Alex gives an in-depth and fascinating look at the fusion of philosophy and technology that propels Numerade's innovative learning platform, and unveils the intricate layers of AI and machine learning models that power their educational ecosystem. Beyond the present, he explores the promising future integration of Large Language Models (LLMs), offering a glimpse into the next frontier of education.
Numerade is about more than just AI and LLM enhancements; Alex emphasizes the human touch woven into Numerade's approach. Discover the impact of meaningful interactions on the learning experience and the deliberate efforts to maintain a personal connection in the digital realm. Alex envisions growth by seamlessly aligning Numerade's services with the dynamic advancements in AI, creating a bridge between cutting-edge technology and genuine human engagement.
Tune in as this episode unravels the philosophy, technology, and human-centric approach that define Numerade's quest to revolutionize STEM education.
Key Quotes:
“Our thesis has always been that when it comes to learning this complex material, it's so much more effective to be able to get sight and sound. So, to be able to sit down and have in front of you an expert educator who's walking you through the video, speaking to you the high-level concepts, guiding you through the various skills that are required to be able to tackle that problem. That's what we found really, really constructive to the learning cycle for our students.”
“One thing that we do also in the background, and this is where AI and machine learning come in, is being able to create holistic end-to-end experiences that really stitch together a lot of these different videos to provide something that takes the student through the whole journey of learning something first at a conceptual level, so really building that knowledge foundation. So, for example, understanding what momentum even is. And then gradually, as they're building that knowledge framework, we're giving them more discrete items and problems for them to solve. And that way we're doing more of that skill-building aspect, really honing in on how do you solve momentum questions and equations.”
“When we were experiencing growth, the one thing that we realized was that sitting students down and conducting these more qualitative feedback sessions with them, getting them into focus groups, wasn't really all that scalable. It works really, really well, and it's still something that we do today, but as the amount of traffic that you get on the site grows, as the number of users that begin interacting with our content grows, there needs to be better ways for us to gain these deeper insights. And that's where we started the exploration of how we best set a system up for ingesting all of the data that's being created by our users during their time interacting with our site.”
“Right now, from all of the data that we're able to see, humans will, at least in the immediate future, still be very much a part of the learning experience. And I think the reason behind that is just because the learning experience is also inherently human. And you have to have some of that human element behind it to really effectuate great learning.”
Timestamps:
(1:08) Numerade's origins
(5:00) Expanding education in the TikTok era
(11:10) Building Numerade through existing technologies
(17:05) Using product-led growth to expand Numerade
(21:05) Using LLMs and AI to utilize Numerade
(29:35) Quick Hits
Links
Connect with Alex
Visit Numerade
Connect with Madhukar
Visit SingleStore
Tuesday Jan 23, 2024
Discover the groundbreaking potential of a Squirrel Bot (SQrL) in transforming your interactions with JIRA users! Join us in this episode as we sit down with Dave Eyler, Senior Director of Product Management at SingleStore. Dave takes us through the evolution of his career, from software engineer to product manager.
Tune in and explore the limitless possibilities of SingleStore's databases, delving into their incredible versatility across various use cases. Brace yourself for a mind-blowing exploration of vector analysis queries and their exceptional performance in AI use cases. We also delve into the future of application development, uncovering how AI technology can elevate applications and the potential of text interfaces in managing complex applications.
In the final stretch, we navigate the impact of AI and machine learning on the industry, unraveling the dramatic shifts within software teams. Dave candidly shares insights on the allure of product management, shedding light on why it magnetizes engineers. Don't miss this remarkable episode filled with insights, experiences, and a touch of humor: a journey into the transformative power of AI and the future of databases!
Key Quotes:
“People rag on JIRA. JIRA is like the ultimate Swiss Army knife: it's not good at anything, but it can be made to do anything, and I think there is power in that. So people make fun of JIRA, but JIRA is actually, I think, a pretty impressive piece of software if you overlook the maddening nature of it sometimes.”
“This is just not a thing that is out there. So we solve this, and this need is growing bigger and bigger and bigger. I want to have real-time analytics. I want to do real-time operations. I want to make smart decisions. As data grows and companies get smarter about data and their operations, this is only getting bigger. And so the thing I love about SingleStore is it's actually an incredibly differentiated product and solves a real need for our customers. So that's definitely one of my favorite things about the job.”
“We're converging now to where customers don't want to have a million databases. They want to have the smallest number of databases they can and serve all their use cases. And that's really the power of SingleStore; this vector analysis use case is not like we had to go build a ton of stuff to make it work. We already had it because we're a super powerful database, shared-nothing, distributed, and purpose-built for speed.”
Timestamps:
(00:38) Intro
(09:47) The Future of Databases and AI Applications
(13:20) Product Management and Application Evolution
(20:57) Customer Feedback, AI, and Product Management
(29:13) Parents' Reaction to Children's Career Choices
Links
Connect with David
Connect with Madhukar
Visit SingleStore
Tuesday Jan 09, 2024
In this episode, we delve into a fascinating conversation with Marcus O'Brien, VP of Product, AutoCAD, at Autodesk, focusing on the evolution of AutoCAD. Marcus takes us on a journey, discussing how AutoCAD has evolved since the 1980s, establishing itself as a go-to tool for architects, engineers, and designers worldwide. From creating 2D and 3D objects to evolving into an extensible platform, Marcus shares insightful details about the product and its wide range of applications.We also have the opportunity to hear Marcus's extraordinary personal journey from Ireland to America, and his transition into product management. Marcus enlightens us on the evolution of product management, discussing the industry's macro shifts that have influenced products and the strategies to enter product management today. Additionally, he shares his thoughts on what makes a great product manager, how AI and ML are utilized in product management and his experience in building models for Autodesk's products.We conclude the episode by exploring the use of LLMs in 3D modeling and design, along with the capabilities of AutoCAD products in generative design. Marcus offers insights into onboarding customers and highlights the available tools for individuals interested in learning 3D modeling and design. Tune in to this insightful episode to learn from an industry expert and explore the world of AutoCAD and product management.Key Quotes: I think going through the technical route and then getting into product management later is a really strong foundation in being able to understand some technical engineering concepts, and then you can kind of scale yourself, learn a bit about strategy but be rooted in the technical side, I think is one of the things that makes you really successful but I think when I look at founders, if I look at all the VC investment that's happening at the moment. It's for a more technical founder base. 
So I think the wild west of, you can just go to a VC and you've got a business plan and you can talk the talk, I think those days might be over now, unless your company name ends in AI. But there tends to be more of a technical bias to these positions now, so I think anyone coming in with a technical background and then switching to PM, it's a good route."

"When I look at AutoCAD's journey, the first 20 years was about building automations on desktop software. Then, for the next 10 years, it was about acquiring vertical products or building vertical products and bringing them to market to target specific niches. From 2010 to maybe 2018, it was more about multi-platform, about creating an AutoCAD that is truly everywhere, whether it's desktop, web, mobile. We've got the AutoCAD design automation API in the cloud, so that if you want to run automations, if you don't want to use your GPU, if you want to do things online with servers, we've developed this full third-party ecosystem of developers who develop capabilities on top of AutoCAD. I think that was the kind of push, and certainly these last number of years for PMs, it's been about machine learning and AI."

"I think you need to learn it on the job, if I'm honest; maybe I'm a bit old school like that. I would push back on the ego, and I actually think the most successful product managers are humble. I think that is one of the qualities you look for. You want table stakes: super smart people. My personal preference is a strong bias for action, so somebody who doesn't have to have the idea, but who wants to get traction and make progress with the idea; incredible communication skills, both written and verbal; you have to be one of those people who just enjoys it.
I think if your company is solely reliant on LLMs to check your AI/ML capabilities, you're probably missing a beat. I think the companies that are looking more broadly beyond LLMs maybe have a little bit more strategic advantage and more value to offer to customers ultimately."

"I think that the way that I raise my kids needs to be different now, because I need them to be comfortable working with AIs. I think that's going to be their childhood. They're going to grow up with AIs. I think we have a role to play in teaching our kids how to get the best from AI, in the way that we had to learn how to use iPads. They're going to have to learn how to work with AIs."

Timestamps
(1:45) - The journey of AutoCAD
(7:56) - Marcus' journey from Ireland to America
(12:05) - Taking the technical route to product management
(19:20) - Bringing GenAI and product management together
(25:26) - LLMs in 3D Modeling and Design
(29:56) - Goal Setting and Adapting in Product Management
(37:15) - Quick hits

Links
Connect with Marcus
Check out the AutoCAD podcast
Check out the Figuring Things Out podcast
Connect with Madhukar
Visit SingleStore