Monday, February 9, 2026

You need high-quality engineers to turn AI into ROI


Pete Johnson, Field CTO, Artificial Intelligence at MongoDB, joins the podcast to talk about a recent OpenAI paper on the impact that AI will have on jobs and overall GDP. Pete, who reads the papers (and datasets) so you don’t have to, says that looking at AI’s impact as a job killer is a flawed metric. Instead, he and Ryan talk about how AI will be a collaborator for actual human workers, how embeddings and vectorization will move the productivity needle, and the five decisions you need to make to realize ROI on AI.

Episode notes:

If you’re curious, read the OpenAI blog post and paper yourself.

For those of you in search of inspiration, check out Werner Vogels’ keynote from re:Invent 2025.

MongoDB provides a flexible and dynamic database that excels with AI data.

Connect with Pete on LinkedIn.

Congrats to Populist badge winner Scheff’s Cat for dropping a banger of an answer on error: non-const static data member must be initialized out of line.

TRANSCRIPT

[Intro Music]

Ryan Donovan: Hey everybody, and welcome to the Stack Overflow Podcast, a place to talk all things software and technology. I’m your humble host, Ryan Donovan, and today we have a podcast sponsored by the fine folks at MongoDB, talking about the race to prove out agentic value. So, my guest today is MongoDB Field CTO, Pete Johnson. Welcome to the show, Pete.

Pete Johnson: Hey, Ryan. Thanks so much for having me.

Ryan Donovan: Of course. So, before we get into talking about this OpenAI paper, tell us a little bit about yourself. How did you get into software and technology?

Pete Johnson: I wrote my first line of code as a sixth grader in 1981.

Ryan Donovan: Wow.

Pete Johnson: And I’m one of those lucky people who was able to turn a childhood hobby into a, now, what is it, 31-plus-year career after college. So, I know that’s a common story for a lot of people, but I asked for an Intellivision for Christmas of 1981, and if you know, you know.

Ryan Donovan: Yep.

Pete Johnson: I instead got a TRS-80 Color Computer, the 4K model, not the 16K. That came with a variant of the Microsoft BASIC interpreter called Color BASIC at the time, and I used it to write a little program that tracked rebounding and scoring stats for my sixth-grade basketball team.

Ryan Donovan: Nice. I think I also got the old switcheroo with the Intellivision, getting a Commodore 64 instead.

Pete Johnson: Well, with the C64, you had an actual disk drive. I had cassette tape storage on the TRS-80 Color.

Ryan Donovan: Right.

Pete Johnson: Or the CoCo, as people called it back then.

Ryan Donovan: So, obviously it’s been a long journey from then. You’ve turned a hobby into a career.

Pete Johnson: Yeah, I did 20 years at HP. I did 17 of that in HP IT, where I wrote my first web application, which went into production in January of ’96. That was about 13 months after the first W3C meeting. I became HP.com Chief Architect at the end of that HP IT tenure, and then I was one of the founding members of HP Cloud Services, which was HP’s attempt to compete directly with AWS on top of OpenStack. And while that didn’t work out for the company, it sure worked out for me personally. I moved out of engineering and into sales and marketing and went on to a couple of different startups. One was acquired by Cisco. The stint right before MongoDB, I was a field CTO at the services arm of CDW, and then I’ve been here since June.

Ryan Donovan: All right. Well, a lot has changed since the old TRS-80 days. Today everybody’s talking about AI and agents and, you know, as people try to get this to have real-world impact (I think I saw the stat that 95% of projects fail), people are looking at, you know, what’s the ROI of this? And OpenAI had an interesting paper talking about the kind of GDP impact, how they might evaluate that impact of agents and agentic tasks. Can you tell us a little bit more about this paper?

Pete Johnson: Yeah, sure. So, that paper, the GDPval paper. So, there was a blog article, there was a white paper, and then there was a dataset. And I’m the kind of guy that’s going to read everything to kind of see where the goodness, or where the hidden stuff, might be, ‘cause there’s always some hiding that goes on in white papers. And if you just look at the blog article, what it’ll tell you is that they looked at 44 occupations across different vertical sectors of the economy. They then went and hired experts with at least 14 years of experience in each one of those occupations, and they had those people define 30 common tasks for each of those occupations. They then took a subset of those, five per occupation, and ran it through a version of a Turing test, where what they did was a one-shot prompt to try to complete the task, and fed that to an LLM. And then they found a person with a decent amount of experience in that occupation and asked them to complete the same task. Then they had an independent third party, a human being, evaluate which one was better. And then they established kind of a win rate between the human being and different LLMs. And to their credit, OpenAI didn’t just test OpenAI LLMs; they tested some of their competitors as well. That was the basic structure of the testing they ran that was the result of that white paper.
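The pairwise grading Pete describes reduces to simple bookkeeping. Here is a minimal sketch of how such a win rate could be tallied; this is illustrative, not OpenAI's actual grading code, and counting a tie as half a win is an assumed convention:

```python
# Illustrative sketch of pairwise win-rate bookkeeping (not OpenAI's
# grading code). Each task produces a grader verdict: "model", "human",
# or "tie". Counting a tie as half a win is an assumed convention.

def win_rate(verdicts):
    """Return the model's win-or-tie rate as a percentage of graded tasks."""
    wins = sum(1 for v in verdicts if v == "model")
    ties = sum(1 for v in verdicts if v == "tie")
    return 100.0 * (wins + 0.5 * ties) / len(verdicts)

# 8 graded tasks: 2 model wins, 2 ties, 4 human wins
verdicts = ["model", "human", "tie", "model", "human", "human", "tie", "human"]
print(win_rate(verdicts))  # -> 37.5
```

A score like Claude Opus 4.1's 47.6 would then mean its outputs won or tied the human's on roughly half the graded tasks.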

Ryan Donovan: Right. And like you said, you went through all three layers of this paper, down to the dataset. What was the kind of interesting takeaway? What’s the stuff that’s kind of hidden there?

Pete Johnson: Well, if you just look at the blog article, kind of the hero graphic that was part of it showed what the scores were for each of the different individual LLMs. And I’ve got some notes here. I’ll read ’em off real quick. For example, at the time, what they were testing was things like Claude Opus 4.1 did the best, where it got a score of 47.6, which meant that it either won or tied, according to the human evaluator, in the different tasks it was graded against. GPT-4o was the lowest-scoring of the seven they tested, and that was 12.4. And so, the way they did the blog article, they showed GPT-4o at 12.4, Grok 4 at 24.3, Gemini 2.5 Pro at 25.5, o4-mini-high at 27.9, o3-high at 34.1, and GPT-5-high at 38.8, before Claude Opus 4.1. And that was, like I said, kind of the hero diagram from the blog article. But if you look at the white paper, there was an, I thought, even more interesting diagram, and I’ll tell you, it was on page seven, it’s figure seven.

Ryan Donovan: Right.

Pete Johnson: And it showed, in addition to the main testing, they also did some analysis of what happened when the AI and the people worked together. And that’s when they saw really big gains. So, they showed a cost and speed improvement, and they did this just with GPT-5-high, of one and a half on both speed and cost. And I think, you know, the hero statistic was about ‘how close are we to AGI?’ But really, when I read through the paper, it turned me into an AGI skeptic. It made me really think about how we’re entering an era where everybody’s gonna be AI-enhanced and see cost and speed improvements similar to what they show in that figure seven.

Ryan Donovan: Yeah. That’s something I’ve been hearing too, that the AI with an expert is just tons better. And, you know, having that human in the loop makes the AI itself better too.

Pete Johnson: Indeed. And so, you cited the MIT study that showed 95% failure rates among AI projects. And I think there’s a couple of reasons why that is. Number one, there is no SKU for AI. What I think a lot of executives think is, ‘I’m gonna make this one product purchase and my AI strategy will be done,’ when really it’s a lot more nuanced than that. So, that’s thing number one. And then, thing number two is if you think you’re going to get AGI and replace people, that’s flawed logic, as this GDPval detail shows. If instead you think about, ‘how can I improve the productivity of the people that I have?’ And then, ‘what do I do with those productivity gains?’ That’s where you really start to see some traction in this market.

Ryan Donovan: Yeah. So what is this paper hiding as you look through the dataset?

Pete Johnson: If you go to the dataset, it shows you the prompts they used for the tests. So, what they did was, across the 44 occupations, they started with 30 tasks each. So, a total of 1,320 tasks. Then they shaved that down and tested five per occupation. And it turns out I’ve had one of the jobs they tested. Solutions architect, or sales engineer as it’s commonly known, was one of the tests, and it was, ‘here’s a diagram of an on-prem three-tier web application. What would it take to migrate it to Google?’ And it gave the exact instructions, which served as the prompt that you would feed to your LLM of choice. So I did: I used Claude Desktop, I fed it the diagram, I fed it the prompt, and it gave me back this very nice, essentially, paper for what a migration plan to GCP would look like. ‘Cause that’s what the task asked for.

Ryan Donovan: Right.

Pete Johnson: But then what? Somebody has to present that to a customer. Someone has to try to gain the trust. So, why is it you should use me? If you get to a situation where every sales engineer representing every consultancy can generate the exact same document, what would your selection criteria be? So, there’s some humanity that’s still part of these tasks that you still need, and like I said before, I think when you look at those high failure rates the MIT study showed, a lot of it has to do, you know, with first that ‘no SKU for AI’ thing; but also, if you think about it in terms of replacing people, that’s the wrong way to go. It’s how can you augment them? And ultimately, what that means is: how can you inject some of your proprietary content into one of these LLMs without having to go through an expensive training cycle? That’s ultimately what it boils down to.

Ryan Donovan: It makes me think of the kind of initial push toward open source, and what people realize when you open source everything: it’s not the software that’s the secret sauce, right? It’s the business, it’s the people, it’s everything around it.

Pete Johnson: It’s the people. That’s exactly what it is. It’s the people. And I think, you know, as you and I were chatting before we started recording, we were both at AWS re:Invent last week, and that was very much the thesis of what I think is now Werner Vogels’ last keynote that he’ll give. And I found it very inspiring that he basically gave us a roadmap for how to be really good software engineers in this AI-enabled era.

Ryan Donovan: That is a great lead-in to the rest of the conversation. How do we get actual value from AI? How do we be really good software engineers, or whatever other AI-enhanced job we have?

Pete Johnson: Yeah, so if I take that in two parts, you know, how do we get good value out of AI, I think, is part number one, and then part number two is some of those things that Werner talked about during his keynote about how do we be good software engineers? So, if I can take the first part first: how do we get value out of these LLMs? Like I said a minute ago, how do you inject your proprietary data into an LLM of your choosing in order to get it to customize and solve for whatever business problem you’re trying to solve? And when I talk to C-suite folks about that ‘no SKU for AI’ thing, you know, what I tell them is, take your problem first. You know, what are your top 10, 15 business problems? Which five do you have data for? And then, which two or three might you have metrics for, in order to determine how things got better? If you just spend money on a SKU and you don’t know what the before or after is, how do you know how to calculate your ROI? So, you need good data, you need good metrics in order to get there. Typically, the way we see people implement that, and the reason I joined MongoDB in the first place, ultimately boils down to having a good vector search and good embeddings.

Ryan Donovan: Mm-hmm.

Pete Johnson: So, we can talk about that a little bit more, but that’s how you get value: when you boil it down, if you have good embeddings and good vector search, and you’re applying that to a problem that you’ve got good data for, and have good metrics, that’s the recipe for getting value out of AI.

Ryan Donovan: Yeah. I think that was something I thought about, you know, reading and writing about, like, how is software gonna survive in the age of AI? And it’s like, it’s the data in the end. And for that data, like you said, it’s the vector search, the embeddings. So, what’s the approach to getting the best kind of vector search and embeddings?

Pete Johnson: Well, that’s where, like I said, when I joined six months ago, the kind of non-technical reason I joined is, you know, I had the chance to go work for a friend, and my career over 31 years tells me that when you’ve got a friend as your boss, that always tends to work out pretty well. But the technical reason was, in February, MongoDB made this acquisition of Voyage AI, and when you first look at that, why would MongoDB buy a company that does embeddings? And ultimately, it’s in order to have a better-together story and make it easier for developers to create a good vector search, and to do it in a way that gets you better retrieval scores. Specifically, there’s two features, one pre-acquisition and one post-acquisition. When you, as a developer, want to go and make an embedding and a vector search, there are typically five decisions you have to make. Once you’ve chosen your embedding model, you have to decide on a similarity score. You have to decide on your chunk size: how big are the chunks I’m gonna put through it? How many dimensions do I need my array to be? What level of quantization, in terms of how big am I gonna store it: 32-bit floating points, or am I gonna give up some retrieval quality but gain some storage if I do 8-bit ints, or go down to binary? And then, the fifth is whether or not to use a re-ranking model. And there’s two features specifically that I’ll talk about that Voyage does a really good job of. In January ’25, Voyage launched a feature called Matryoshka learning. So, consider that you embed your corpus of data, and you’ve decided to try 1,024 dimensions, and that gets you a certain size and a certain quality. What if now I want to try 512?
With a traditional embedding model, I would have to re-embed my entire corpus of data with 512 as the number of dimensions. But with Matryoshka learning, what you’re able to do is take the embeddings you already have, and it turns out they’re ordered, so you just lop off the last 512.
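The truncation trick can be sketched in a few lines. This is a conceptual illustration of Matryoshka-style truncation, not Voyage's internals; the vectors here are random stand-ins for real embeddings:

```python
import numpy as np

# Conceptual sketch of Matryoshka-style truncation (not Voyage's internals):
# embeddings trained this way order information from most to least important,
# so a 1,024-dim vector can be cut to its first 512 entries and re-normalized
# instead of re-embedding the entire corpus.

rng = np.random.default_rng(0)
full = rng.normal(size=(3, 1024))                      # stand-ins for embeddings
full /= np.linalg.norm(full, axis=1, keepdims=True)    # unit-length, 1,024-dim

small = full[:, :512]                                  # "lop off" the last 512
small = small / np.linalg.norm(small, axis=1, keepdims=True)

print(full.shape, small.shape)                         # (3, 1024) (3, 512)
```

Iterating on the storage-versus-quality trade-off then means slicing stored vectors, not re-running the whole corpus through the model.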

Ryan Donovan: Interesting.

Pete Johnson: And that makes it so that, as a developer, you can iterate through your cycles of determining storage size versus retrieval quality: what am I trying to get for my specific application? It decreases the amount of time it takes you to go through that cycle. So, that’s an important way of trying to make it easier on the developer to make that decision. So, that was the first one. The second one that really grabbed me, which we launched back in July, was something called contextualized chunking. So, the way a traditional embedding model works, let’s say you wanted to embed at the size of a sentence.

Ryan Donovan: Mm-hmm.

Pete Johnson: Well, a sentence in one document might appear in a second document and have very different meaning based on the context in which it appears. So, what people traditionally do to overcome that is they’ll embed a larger chunk size to try to capture the context around that sentence. Well, that means you’ve got more storage as you try to improve your retrieval quality. And what contextualized chunking does is when you send your, in this case, sentence to be embedded, you also send the entire document, and what we’ll do in the background is embed the context of the document along with the individual sentence, and it actually flips it so you can get better retrieval quality with smaller chunk size.

Ryan Donovan: Interesting.

Pete Johnson: Which is completely opposite of what you’d think. So, that’s another example of trying to reduce the friction a developer might have as they’re trying to learn these embeddings.
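Conceptually, the difference contextualized chunking makes is in what travels with each chunk. This sketch shows only the shape of the request, not the actual Voyage API or model internals; the documents are invented for illustration:

```python
# Conceptual sketch of what contextualized chunking changes in the request,
# not the actual Voyage API: a traditional embedding call sends each chunk
# alone, while a contextualized call keeps chunks grouped with the rest of
# their parent document, so the model can tell apart identical sentences
# coming from different documents.

doc_a = ["Quarterly report.", "Growth was strong.", "Risks remain low."]
doc_b = ["Incident postmortem.", "Growth was strong.", "Alerts fired late."]

traditional_requests = doc_a + doc_b       # flat chunks, context lost
contextualized_requests = [doc_a, doc_b]   # chunks travel with their document

shared = "Growth was strong."
print(traditional_requests.count(shared))  # -> 2, indistinguishable copies
```

In the flat list, the two copies of the shared sentence are identical inputs and must embed identically; grouped by document, the model can produce different vectors for each.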

Ryan Donovan: Yeah. I’ve also seen various overlapping chunking strategies for embeddings.

Pete Johnson: Yes.

Ryan Donovan: Which seems like, you know, you might get better context, but again, it’s increasing the storage costs.

Pete Johnson: It is. When it comes to the quantization, the chunk size, and the dimensions, it’s this constant battle the developer is facing, where you’re trying to balance storage size (and it’s not just disk storage, it’s the size of the index in memory) versus the retrieval quality. So, what we try to do, both with the base embedding models and with the vector search we have on top of the base MongoDB product, is try to reduce that friction. I had somebody explain it to me this way once, where recently one of our executives said, ‘remember when JavaScript came out?’ I’m old enough to remember when JavaScript came out. And then we got jQuery, and that was way easier to use, and nobody used raw JavaScript anymore. But now we’ve got, you know, React and Angular, and almost nobody uses jQuery. In this ecosystem of everything related to AI, whether it’s learning the frameworks to build agents or learning these embeddings, we’re still way closer, in our timeline and in the sophistication of the tools, to the original JavaScript than we are to React or Angular. And so, what we’re trying to do, what MongoDB is trying to do, both with the Voyage acquisition and with the base product, is move us a little bit closer to jQuery, because we’re gonna see more people develop agents and AI products in the next three years than we have in the last three years. So, lowering that learning curve and reducing that friction for the individual developer is a really big part of that.

Ryan Donovan: Yeah. It almost seems like, you know, you’re talking about moving up the abstraction levels, right?

Pete Johnson: Absolutely. That’s a big part of it.

Ryan Donovan: Mm-hmm. So, with, you know, all these trade-offs people are making with the storage side, reducing the index in memory, all the other trade-offs, how can a developer kind of approach making these decisions? Are there ways of thinking about it that you can offer?

Pete Johnson: Yeah, so, it boils down to those five decisions I talked about before, once you’ve chosen your embedding model, and we try to make those core five decisions easier on the developer so that they can spend more of their time working on their core business logic and less time worrying about the mechanics of the embedding. So, like I said, typically when it comes to the quantization, the number of dimensions, and the chunk size, those are the core three of those five decisions where you’re making that balance between the two. Typically, what we recommend for the similarity score is to start with cosine. There’s a couple of others people typically use; cosine ends up being a good starter similarity score. When it comes to chunk size, if you use the contextualized chunking that Voyage offers, you can go to 64K tokens and get much better retrieval scores than you can when you go with bigger chunks. So, you can kind of ignore the overlapping chunk size if you use the contextualized chunking models. When it comes to dimensions, start with 1,024, and again, because we’ve got Matryoshka learning in there, it’s easy to try 512. It’s easy to scale it down, to see if you can get an acceptable retrieval score at a smaller storage–

Ryan Donovan: Right. It’s easier to scale down than up.

Pete Johnson: It’s easier to scale down than up. Exactly. When it comes to quantization, when you go to build the index, the way the MongoDB vector search API works, you just get to select what level of quantization you want to use. So, by default, you can use the full 32-bit. Again, you can experiment with a different one, using the 8-bit int, to see if you still get an acceptable retrieval score, but at a lower storage size. And then we’ve actually found that the re-ranking can help you quite a bit as well. If you combine the benefits you can get, especially out of the contextualized chunking, with a best-in-class re-ranker, we find you can improve the retrieval score somewhere in the neighborhood of 10% to 15%, which can be the difference between a hallucination and offering somebody an AI-enhanced solution that actually helps them solve a real-world human problem.
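One simple way to picture the 32-bit-to-8-bit trade-off Pete describes is scalar quantization. This is a generic sketch of one common min/max scheme, not MongoDB's or Voyage's exact implementation:

```python
import numpy as np

# Hedged sketch of scalar int8 quantization for embedding vectors: one
# simple min/max scheme, not MongoDB's or Voyage's exact implementation.
# Mapping each float32 value onto 256 levels cuts storage roughly 4x.

rng = np.random.default_rng(1)
vecs = rng.normal(size=(1000, 1024)).astype(np.float32)

lo, hi = float(vecs.min()), float(vecs.max())
scale = (hi - lo) / 255.0
q = np.round((vecs - lo) / scale).astype(np.uint8)     # 8-bit codes
restored = q.astype(np.float32) * scale + lo           # approximate round-trip

print(vecs.nbytes // q.nbytes)                         # -> 4 (storage ratio)
```

The round-trip error is bounded by half a quantization step, which is the retrieval quality you trade for the 4x storage savings; binary quantization pushes the same trade further.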

Ryan Donovan: Yeah. And you also mentioned keeping the database index in memory. I know when we did our cloud transformation, we had to get specialized storage containers just because we needed so much in memory, right? Instead of compute. Are there ways to make that trade-off, to either reduce the index size for costs, or, if you’re going for speed and performance, to increase that index size?

Pete Johnson: Yeah, typically there’s a correspondence between the size of the index and the speed you get out of it, and that conceptually makes sense: if you’ve got a bigger indexing space to try to search across, then your performance is going to, you know, see a similar increase. And again, it depends a lot on your specific data. It depends a lot on what you’re running it on. But vector search is something we offer as part of the Atlas products, and, if you listen to our most recent analyst call, Atlas (the cloud version of our product that runs in every single hyperscaler data center, so you get to pick where it gets deployed) will automatically manage that instance for you. Vector search is part of that product.

Ryan Donovan: I know we talked in the call about Mongo being kind of a niche product. Do you wanna kind of address that?

Pete Johnson: Yeah, I mean, when I talk to customers about this, because of how far back relational databases go– so, I happened to have been born the same year the white paper that gave birth to relational databases was written. That was in 1970. If you think about what the world was like in 1970, the kinds of applications were oriented towards departments, not the public at large. You could have downtime on the weekends, and storage was really expensive. And because of that, the education system we all go through really tends to put an emphasis on normalization of data. So, how can you lay your data out so that you’re storing the absolute minimum amount of data? And what our founders saw– so our founders sold DoubleClick to Google, and that’s part of what the ad system you see on Google searches is based on. What they saw was that there were some more modern use cases where maybe it was okay to not fully normalize, if what you get is the advantage of better transactional response. So, the first MongoDB commit was in 2007. So, that was after internet. That was after mobile. That was after cloud. So, by being aware of that and having a more flexible schema structure, and you may know MongoDB is essentially based on this JSON model (we store the data in a binary version of JSON called BSON), that can get you far faster transactional response, the kind of thing you need in an AI application, versus, say, something that’s analytical, where maybe you’ve got more data and you do need to worry about normalization. If you de-normalize some of that data with MongoDB, you can get better transactional response.
And instead of just thinking, ‘I need to normalize at all costs,’ well, if you’re willing to de-normalize a little bit, then the trade-off is you get better transactional throughput and better transactional response time. Does that fit every workload? No. Does it fit a ton of workloads that are super important? Yes, because with a modern application, you can’t have downtime on the weekend like you could in 1970, right? Like, slow is the new downtime.
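The normalization trade-off can be made concrete with a toy example; the collection and field names here are invented for illustration:

```python
# Toy illustration of the normalized-vs-denormalized trade-off; the
# collection and field names are invented for the example.

# Normalized (1970s-style): data split across "tables"; reading one order
# means stitching together three lookups.
customers = {1: {"name": "Ada"}}
products = {"p9": {"title": "TRS-80 Color Computer"}}
orders = [{"order_id": 100, "customer_id": 1, "product_id": "p9", "qty": 1}]

def read_order_normalized(order):
    return {
        "order_id": order["order_id"],
        "customer": customers[order["customer_id"]]["name"],
        "product": products[order["product_id"]]["title"],
        "qty": order["qty"],
    }

# De-normalized (document-style, what MongoDB stores as BSON): the whole
# order is one self-contained document. Some duplication, one read.
order_doc = {
    "order_id": 100,
    "customer": "Ada",
    "product": "TRS-80 Color Computer",
    "qty": 1,
}

print(read_order_normalized(orders[0]) == order_doc)  # -> True
```

Both shapes answer the same query; the document version duplicates a little data in exchange for serving the read in a single lookup.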

Ryan Donovan: Right.

Pete Johnson: So, there’s lots of use cases that fit that more de-normalized model that we provide.

Ryan Donovan: Yeah. And you know, obviously JSON is one of the foundational technologies of the current internet, right? Everybody’s got JSON.

Pete Johnson: And AI, as it turns out.

Ryan Donovan: And AI, as it turns out.

Pete Johnson: You know, the one other thing was, if you haven’t watched the Werner keynote, I’d recommend it. It’s a good use of an hour and fifteen minutes. The ‘too long; didn’t read’ version, if I go from my notes: he talks about the importance of remaining curious, of being a good communicator; just because you might use AI-enhanced tooling to generate your code, you still own it. You’re still responsible for it running in production. It’s not an excuse to say, ‘well, my AI generated it.’ No. Like, you own it. And he talked about some techniques for making sure that you inspect that code and put your seal of approval on it. And then he talked about the importance of thinking in systems, because AI is gonna be really good at helping you with individual tasks, and you as the human need to see across those tasks, and why each one is necessary. And that blended into his final thing, where he used this word that not many people recognized, the polymath, which means you’re an expert in one deep topic, but you know a little bit about a lot of other things. So, like that T-shaped engineer you might have heard of–

Ryan Donovan: mm-hmm.

Pete Johnson: Instead of that word, polymath. And if you combine those five things, that’s when he thinks we’re about to see this renaissance of software development based on being AI-enhanced. And that’s what this kind of Vogels renaissance developer is: if you embrace curiosity, communication, ownership, thinking across systems, and being a polymath.

Ryan Donovan: Yeah.

Pete Johnson: It’s worth your time. I found it super inspirational. I want to go build stuff.

Ryan Donovan: Yeah. Well, in terms of the ownership, I read an article a while back that said like, ‘can you trust AI code?’ Well, no. ‘Can you trust junior developer code?’ No. ‘Can you trust code you wrote yesterday?’ No. Like, make sure you test and understand any piece of code that comes across your desk.

Pete Johnson: Absolutely. The difference is that we gain an understanding and build that trust in a traditional way because we write the code. So, as we’re writing it, we trust what we wrote. That doesn’t mean you don’t have– you still need the review cycle if you’ve got AI generating some of that for you. So, it’s a shift in thinking. I mean, like I said, I’m 55 years old. I’ve been writing code since I was 11. I haven’t written a manual line of code in eight months now.

Ryan Donovan: Wow. Go watch the keynote, get inspired, and start building.

[Music]

Ryan Donovan: Okay. Well, it’s that time of the show again where we shout out somebody who came on Stack Overflow, dropped some knowledge, shared some curiosity, and earned themselves a badge. Today, we’re shouting out a Populist badge winner: somebody who dropped an answer that was so good, it outscored the accepted answer. So, congrats to Scheff’s Cat for answering ‘error: non-const static data member must be initialized out of line.’ Curious about that error? We’ll have the answer for you in the show notes. I’m Ryan Donovan. I edit the blog and host the podcast here at Stack Overflow. If you have comments, questions, concerns, email me at podcast@stackoverflow.com, and if you wanna reach out to me directly, you can find me on LinkedIn.

Pete Johnson: My name’s Pete Johnson. I’m the Field CTO of AI at MongoDB. You can find me on LinkedIn, where I read all the white papers so you don’t have to. I’ll happily connect with and have open DMs with anyone, so feel free to join in, and I’ll do that research so you don’t have to.

Ryan Donovan: All right. Thanks for listening, everyone, and we’ll talk to you next time.
