
All About AI: Building Alexa and Beyond, An Insider’s View on the AI Revolution

Hosted by Aaron Burnett with Special Guest Al Lindsay

For the second installment of Digital Clinic’s “All About AI” series, we dive into the secretive world behind the development of Amazon Alexa with Al Lindsay, the third person hired into the Alexa group and former VP of Alexa Engine Software at Amazon. From the early clandestine data collection efforts dubbed ‘Amped,’ to the challenges of far-field speech recognition, Al provides a detailed account of Alexa’s journey from concept to deployment.   

Al also discusses the current state of AI, its potential impacts, and how businesses can effectively leverage AI to gain a competitive edge. Enjoy this rare insight into one of the tech world’s most successful projects.  

The Evolution of AI and Machine Learning


Al: As we were building the team and I was hiring scientists and science leaders, we were very secretive about what we were doing. It was actually one of the best kept secrets at Amazon. No one knew what we were building until we actually shipped. And more than once our data collection sites got busted. Well, they were visited. They weren’t busted, and even through those, nobody knew it was Amazon.  

Aaron: Well, that’s Al Lindsay, who, as Vice President of Alexa Engine Software at Amazon, led the technology and science teams on Alexa from inception through the launch of Echo. As Alexa massively scaled, Al remained responsible for the underlying Alexa Engine as it rapidly grew to support an ever-increasing ecosystem of new devices and applications, ultimately numbering more than 100 million Amazon Echo devices around the world.

Oh, and prior to that, Al was responsible for all the technology associated with Amazon Prime. Al has operated at the frontiers of machine learning, natural language processing, and artificial intelligence, and he’s done so while hyperscaling and maintaining utility-like levels of reliability. His perspective on the opportunities, risks, and potential impacts of AI is informed by experiences few of us will have.

Aaron: Can you orient us to where we are on the continuum of AI, from narrow AI to general AI to superintelligence? Where are we on that continuum? What are we looking at now, in particular with ChatGPT and Claude?

Al: It’s interesting. I think AI, if you want to go back historically to where it began, you could talk about Babbage machines, or you could look at Alan Turing’s work, logic, expert systems. Deep Blue, I think, is really an example of an expert system.

It was expert at playing chess and all of the variations and moves, was trained on the expert moves of the grandmasters, and was able to apply that. You think about the evolution to where we are today. Now, instead of logic, programmed rules, constraints, and expert approaches, it is more general.

I know this isn’t what you meant by general AI, but it’s more generally now about models trained with data that’s ground truth and annotated so that the patterns in that data can be used to drive understanding of your surroundings. So in speech recognition, that’s recognizing words based on adjacent words or sounds and phonemes. 

Originally in speech recognition, actually text to speech is maybe a better example. Text to speech, the more robotic-sounding voices, were generated using recordings of the basic phonemes of speech, and then you splice those together to create the words on the fly. Then you smooth it, so it doesn’t have the robotic sound to it. Whereas today it’s almost all generated from AI models, I guess you would call it in today’s parlance. So it’s completely evolved from more human-crafted ways of doing it to: let’s listen to billions of hours of people speaking in various different ways and use that to train a model to be able to replicate that type of speech, that speech pattern, and look at the adjacencies of words, predict what the next word might be, and give it that natural sound.
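
To make "predict what the next word might be" concrete, here is a minimal, illustrative sketch in Python, not anything from Alexa’s or any vendor’s stack: a toy bigram model that counts adjacent-word pairs in a tiny invented corpus and guesses the most likely next word. The corpus and function names are made up for illustration.

```python
from collections import Counter, defaultdict

def train_bigrams(sentences):
    """Count word -> next-word transitions in a toy corpus."""
    transitions = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            transitions[current][nxt] += 1
    return transitions

def predict_next(transitions, word):
    """Return the most frequent word seen after `word`, if any."""
    candidates = transitions.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

# Tiny invented corpus, just to show the mechanics.
corpus = [
    "play some music",
    "play some jazz",
    "add milk to the shopping list",
    "add eggs to the shopping list",
]
model = train_bigrams(corpus)
print(predict_next(model, "some"))      # "music" (or "jazz" on a tie)
print(predict_next(model, "shopping"))  # "list"
```

Modern neural language and TTS models learn these relationships as continuous representations rather than raw counts, but the underlying "adjacent words carry signal" intuition is the same.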

That evolution in speech has happened. The 90s were when speech recognition started to be used more commercially and was somewhat functional in systems like 411 Directory Assistance. It could recognize the name of a city, a fairly constrained vocabulary, or a yes/no response.

I actually worked on those systems at Nortel back in the early 90s. The evolution from there to, say, Alexa, the technology took leaps and bounds. And you can call it AI. It’s really machine learning methodologies to train models from data.  

Aaron: Yeah.  

Al: And the biggest challenges for Alexa were where to get data that doesn’t exist and that you can’t buy, because it was far-field, across-the-room speech, and there wasn’t any such corpus of data available to us back then.

Developing Amazon Alexa

Aaron: And the Alexa team took a pretty novel approach to getting that data. Can you tell me about that?  

Al: The Amped data collection. So basically, your speech recognition is going to be as good as your training data, as is the case with most things AI. The training data that was available to us for speech recognition was what we would call close talk, recorded on a microphone that was maybe six inches from the mouth. Think about talking to Siri on your phone. You’ve got to hold it right next to your face, or a microphone like the one that’s in front of me.

There were no training sets ever created of far field speech recognition. As we were building the team and I was hiring scientists and science leaders, we were very secretive about what we were doing. It was actually one of the best kept secrets at Amazon. No one knew what we were building until we actually shipped. 

I don’t think that ever happened before or after. Everyone knew we were building the Kindle, or Fire TV, or whatever. I would be trying to convince some senior speech recognition scientist to come and join us, and all they could imagine was that we were trying to do shopping using voice via an app, and that’s not very interesting.

Then once I finally convinced them and they came on board, on their first day, I would do the reveal and I’d say, “All right, here’s what we’re building. Here’s the vision for Alexa, like the ambient computer in the room, the Star Trek computer. We’re going to talk to it from across the room.” 

I think to a person, they all had the same reaction: “Oh, that’ll never work.” Everybody knows far field speech recognition doesn’t work. Why? Because of something about the inverse relationship between distance and signal-to-noise ratio. Basically, the further away you get, the more steeply the signal-to-noise ratio drops off. If you’re two feet away versus three or four feet, the change in signal is so steep that you just lose it in the ambient noise.

The answer was to train models on data that matched that environment. So have your speaker 10 feet away with the microwave running and the TV on and water running in the kitchen to create exactly matched training data, annotated, so that you could train models that worked in that same environment. There was no training set for that.

The creative solution was to, at massive scale, roll out to condos and houses all over America to capture unique dialects for the first language, which was US English, and set up in these houses microphone arrays that roughly represented what Alexa was using to gather your voice, and then bring people in and pay them for an hour of their time to come in and do something, but basically generate this speech for us. We couldn’t have them read a script. What you want to do is prompt them and say, “What’s something you would add to a shopping list or a to-do list?” and try to get the variation of things they might say, in addition to how they would say it, and then record it across the room on tons of different microphone arrays all over the room. Literally terabytes of data were being recorded in each of these rooms every day and uploaded to the cloud, so we could process it and use it to train models, all of it being automatically annotated as we went. We built all the software systems to do it and then rolled these trucks across the country.
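
As a rough back-of-the-envelope illustration of why the across-the-room case is so much harder (my own made-up numbers, not Amazon’s), the sketch below assumes simple free-field inverse-square falloff of the direct speech signal against a fixed ambient noise floor and shows how quickly the signal-to-noise ratio collapses with distance.

```python
import math

def snr_db(distance_m, speech_db_at_1m=60.0, noise_floor_db=55.0):
    """Very rough free-field model: the direct speech level drops about
    6 dB per doubling of distance (inverse-square law) while the room's
    ambient noise (TV, microwave, running water) stays flat.
    All numbers here are illustrative assumptions, not measured values."""
    speech_db = speech_db_at_1m - 20 * math.log10(distance_m)
    return speech_db - noise_floor_db

for d in (0.15, 1.0, 2.0, 3.0):
    print(f"{d:>4} m  ->  SNR ~ {snr_db(d):5.1f} dB")
# ~0.15 m (close talk) sits comfortably above the noise; by 2-3 m the
# direct signal is at or below the noisy-room floor, which is the
# far-field problem the matched training data was meant to address.
```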

There are crazy stories from that, because when you come into a condo neighborhood or townhouses and set up, and then there’s this constant parade of people coming through your front door, neighbors get suspicious and the police show up wondering if there’s some nefarious activity taking place.

More than once, our data collection sites got busted. Well, they were visited. They weren’t busted, and even through those, nobody knew it was Amazon, and nobody knew what we were doing.  

The device microphone arrays were hidden under transparent cloth, so you couldn’t see what was behind the curtain while we were doing it.

People were being prompted in generic ways, so they didn’t really know what they were doing. They were just getting their $20 for an hour or whatever it was, in and out. But in the span of about a year, we were able to very quickly get to a critical mass of data, to what scientists refer to as the ‘knee in the curve’ of the word error rate, where you see gains and then they start to slowly taper off. We got to an acceptable word error rate through that data collection, which allowed us to launch the original invite-only program, so that we could then carefully bring in 10,000 more people, have them use something that worked well enough that it was functional and they wouldn’t be frustrated, and then add that data to the mix and move forward.
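
For reference, the word error rate Al mentions is a standard metric: the minimum number of word substitutions, insertions, and deletions needed to turn the recognizer’s output into the reference transcript, divided by the number of words in the reference. A small generic sketch (not Alexa’s evaluation code) looks like this:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    computed with a standard edit-distance dynamic program over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution / match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("add milk to the shopping list",
                      "add milk to the chopping list"))  # 1 error / 6 words ~ 0.167
```

The ‘knee in the curve’ is the point where adding more matched training data keeps lowering this number, but with rapidly diminishing returns.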

So that was really the Amped data collection, pretty creative, kind of expensive, a little bit off the wall. But it was solving the cold start problem.

Aaron: And I read that it cut down your training time from a projected 20 years to that one year.  

Al: I don’t think we really knew how long it would take, but that’s probably a fair estimate. 

The AI Continuum

Aaron: Yeah. When we go back to where we are in this continuum, narrow to artificial general intelligence and superintelligence: when I think of what exists in AI today, it still seems fundamentally that what’s being served is derivative. It’s not novel. Is it right to think of that as advanced narrow intelligence, or are we truly at a point of artificial general intelligence?

Al: So I think the definition of either one of those terms is unclear and also a moving target. If I were to say to you, “What is general intelligence for an artificial being?” and we were in the 40s, you’d probably be happy if it beat the Turing test; you’d be like, yeah, that’s general intelligence right there. If it can trick a human into thinking it’s a human, then we’re done. No one would accept that today, I think, as that definition.

If you move forward through time, it’s been constantly revised. And so today I might show you a bot that can book your vacation, and you actually can’t tell that it’s not a human, but it’s a specialist in one area, say travel. 

You might say that’s not really general intelligence. It’s narrowly focused, or it doesn’t count because it was trained using data from et cetera, or everything it does is derivative work based on what it’s read and trained on. I think the definition matters because it’s unclear and it’s constantly moving.

So depending on what your definition is, for some definitions, we have achieved it. For more ambitious definitions, I think the learning part is what’s missing for me. The models today are trained on immense amounts of data, enabled by the advent of massive compute power, massive memory, these things being affordable, the ability to crunch all those numbers, the invention of the GPU for gaming purposes, which made all this linear algebra easy to perform. A lot of things have come together to get to where we are now, but the systems aren’t learning in real time to the degree that I would associate with a conscious being. So if you’re worried about Skynet and the Terminator becoming self-aware, I don’t think we’re anywhere near that.

So if that’s your definition, I feel that’s a long ways off.  

Aaron: Sure. That makes sense. And of course, that’s consistent with say, ChatGPT or Claude training a new model and taking months to do it and then releasing that new model. And that’s what you have. There’s no active learning going on there.  

Al: They seem to be plugging a lot of those gaps. You’ve got a massive model with trillions of parameters that took three months to train, and everything it knows is already three months stale, but then you can plug in live APIs, and you can recognize that you need to go and grab more. And Alexa did this from the start. If you wanted to know who was winning the basketball game right now, we needed to go out to stats.com and find out the answer in real time. And so for the LLMs, the first trick was, “Hey, look, we can ingest all of the knowledge in the world and look what we can do with it.” And the next set of tricks is: how can we start to make that more current, more zeitgeisty, so it understands current events and gets more real-time, and that starts to become the feedback loop.
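
Here is a highly simplified sketch of that pattern, deciding that a question needs live data and fetching it before answering. The keyword routing, `fetch_live_score`, and the data it returns are hypothetical placeholders for illustration, not a real Alexa or LLM API.

```python
from datetime import datetime, timezone

FRESHNESS_CUES = ("right now", "latest", "today", "score", "currently")

def needs_live_data(question: str) -> bool:
    """Crude router: answer from static model knowledge or do a live lookup."""
    q = question.lower()
    return any(cue in q for cue in FRESHNESS_CUES)

def fetch_live_score(question: str) -> dict:
    """Placeholder for a real-time source (e.g. a sports-stats API).
    Hard-coded here purely for illustration."""
    return {"game": "Home vs. Away", "score": "101-99",
            "as_of": datetime.now(timezone.utc).isoformat()}

def answer(question: str) -> str:
    if needs_live_data(question):
        live = fetch_live_score(question)
        # In a real system this context would be injected into the model's
        # prompt; here we just format it directly.
        return f"As of {live['as_of']}, {live['game']} is {live['score']}."
    return "Answered from the model's (months-old) training data."

print(answer("Who is winning the basketball game right now?"))
print(answer("Who invented the phonograph?"))
```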

I think right now that’s happening; we’re starting to see more and more of that. But where it really becomes learning is when those round-trip experiences go back into the core model in real time, versus being set aside for the next training cycle.

AI Best Practices

Aaron: What do you think, if anything, is commonly misunderstood about AI?  

Al: By whom? I think it matters. There’s the general public, but there’s also business and industry.

Aaron: I think business and industry.

Al: I haven’t seen very many successful attempts to incorporate the amazing advances that are taking place into the average business. 

High tech companies are good at it. Your Amazons, your big techs, are going to figure it out, run fast, be able to see further, look around corners. But for insurance companies, and construction firms, and take your pick, everybody has it in their strategic plan to figure out how to leverage AI to be more efficient, to create durable differentiation versus their competitors, to win new markets, to do more with less, but none of them know how to do it or where to begin. I think a big part of that is not understanding what it even is, what its shortcomings are, and how to overcome them. So a lot of companies don’t even begin, because they heard the word hallucination and that scared them enough: we don’t want our brand to be diluted by some random thing that this AI might say, and we can’t control it, and therefore we’re out.

Some expertise is needed inside companies in order to figure out how to leverage the technology in a meaningful way. And a lot of them bolt it on: “Yeah, we’re going to hire a team of folks, and we’ll set them over here on the side, and we’ll tell them ‘Your mission is to do innovative things with AI,’” when it really needs to be working backwards from the core business.

What are the problems you’re trying to solve for your customer, for your business? Now let’s look at how those problems might be solved with AI or technology. So there’s usually that gap, that assumption that this is something we can bolt on.

Aaron: Sure. Is there a consistent theme to the vision for AI that’s being missed? 

Al: By those who are creating it or those who are using it?  

Aaron: Those who are using it. Keeping with those in industry and business who should be using AI in different ways or more expansive ways than they are.  

Al: Most businesses could benefit from it, but each one of those cases is: let’s open up and take a look at what you do every day, what your challenges are, and then where it’s ready to help you and where it maybe isn’t quite ready, where you might need a data strategy to get your own data, your customer data, into the right format and state, under the right governance controls, so that you can use it, so that you can then apply those tools, AI, to aiding your business. Data is the biggest part of that.

Aaron: Yeah, I was just going to follow up on that. When I talk with other people who are beginning to use AI, it tends to be focused on novelty task accomplishment that isn’t reliant on high-fidelity or carefully curated data. It’s not part of a larger data strategy; it’s a way to write copy or a way to analyze something in C2 based on a very limited data set, a file rather than a data tranche.

You had not just a front-row seat; you were at the center of the development of Alexa at Amazon. You were VP of Alexa Engine Software from 2011 to 2019, third employee?

Al: Yeah. So my role changed over time, I’d say. I was the third employee. Greg Hart was Jeff’s shadow, his understudy, for the 18 months leading up to us starting to build this. He grabbed me to build and lead all things technological. He led the product team, and then the other person who was there before me, other than Greg, is Frédéric Deramat, who’s still there. Frédéric’s a distinguished engineer, the leading mastermind behind a lot of the engineering innovations that took place, whether it’s creating a skill or the Alexa Voice Service for embedding Alexa in other devices; a lot of the underlying structure of the architecture of everything that makes Alexa work was Frédéric’s brainchild.

So Frédéric was my Chief Engineer, and then there were many scientists as well. So I came in 2011 with the mandate. I had the vision. There was the six pager that had been written and kicked around before Greg was handpicked to lead this. My job was to figure out how to build it in something crazy, like a year, year and a half. Now, the fact that we did it in three years still blows my mind, because we had to solve far field speech recognition, we had to come up with these high-quality voices, we had to get it all into a device.

The latency obviously was a really difficult challenge: getting that responsiveness down under one and a half seconds, just so that it felt like a human interaction, so you didn’t have long lags and a lot of latency. And then there was all the functionality, whether it was music or to-dos or shopping lists, the domains, if you will, or the applications; there was a massive expanse of things we could have done.

Many of which were enabled with Amazon properties like Amazon Music and Audio Books, et cetera. There was a lot to do.  

So we did acquisitions; we grew quickly. We built new office sites so we could harvest top talent in the speech field. And we set ourselves up close to universities that had expertise and started building relationships with key people in academia, so that we could create a pipeline of speech scientists.

There were a lot of parallel things going on to make it happen. But yeah, I was there in the role of leading all tech and science from 2011 to 2016. Between 2014 and 2016, we surprised ourselves. We weren’t arrogant enough to believe that we were going to be as successful as we were in that time.

We went from 0 to over 100 million endpoints, and just the type of scale that we were seeing in terms of traffic, applications, numbers of devices, numbers of languages, team size. I took the team from zero to 400, and we launched. By 2016, we were in the thousands, and by 2019, when I left, there were 15,000 people in Alexa.

So my role started to change in 2016, when it became obvious that two VPs weren’t enough; we were going to need 20, and they needed to be able to run their own large slices of the business. I started to take large portions of things I was responsible for and give them to other capable leaders to scale, so that I didn’t become the scaling bottleneck.

Hyperscaling: Lessons from Amazon

Aaron: Being in an environment like that, where you had to scale to a size that is if not unique, then unusual globally, gives you a highly differentiated perspective now, as you look at technology and you look at systems and you look at opportunities. How do you think you look at things differently today than others might because of that experience? 

Al: The template for hyperscaling looks different depending on what you’re scaling. We all hope that we’re able to look around corners, but we all make decisions every day that are maybe accumulating technical debt or scaling debt, or deciding to do things a certain way for a business reason, to move quickly today and try to get signal, versus deliberately architecting and designing products or software stacks for hyperscale.

I don’t think anyone should ever, from the outset, build systems that are meant to be operated at hyperscale when they’re unsure of their own success. Everything’s an experiment and an iteration. So there are one-way doors and two-way doors, to steal from Jeff’s terminology. If it’s a two-way door, run through it, meaning you can come back and change it. If running through it allows you to move your business forward and test your market, get your signal, find your product-market fit, know whether you’re on the right path, get it in the hands of real customers and get feedback, so you’re not in an echo chamber or navel gazing, as I call it, only paying attention to your own ideas and not listening to customers, then the faster you can get to those things, the better.

Hyperscaling is about the decisions you make every day: being thoughtful about those that are one-way doors and making sure you don’t make decisions that’ll prevent you, or make it extremely difficult for you, to take the next step, should you have that success.

I think with looking around corners and thinking about the true one-way doors, the experience definitely helped show me some of those that became really hard challenges, and it helped fine-tune the pattern matching around what are the things to worry about now versus what can wait.

Aaron: You’re an investor and an advisor now. Do you find that when you’re engaging with clients, with potential investments, that because of the experience you had at Amazon and with Alexa, you tend to view things differently? Do you see different patterns, risks, and opportunities that might not be so evident to others who haven’t had the same sort of experience?

Al: I like to think so, but again, I don’t want to be too arrogant about it. I think everybody at the table in a room, when you’re looking at something, brings different experiences to bear, and this is one flavor of that, which can be accretive to the other opinions in the room, to help guide. But yeah, that’s the value I try to bring to the folks that I work with: the experience, not just of the hyper growth, but also of operating at scale, and operating at scale in a very high-quality, high-availability, high-performance environment.

If you’re selling iPhones, they better work every time you pull them out of your pocket. If you’re a 30-person startup and you have a new game or a social media site that you’ve launched, and it goes down for an hour or 30 minutes here or a day there, it’ll be tolerated. But in our environment, that wasn’t tolerable. So you had to ride the rocket ship of scale and growth, but do it while keeping the lights on consistently. If the lights ever went off, it was all hands on deck.

The Original Vision for Alexa

Aaron: I’ve seen a picture of what purports to be the original whiteboard drawing for Alexa, which made me curious to ask, to what extent is that what people experience with Alexa today? To what extent was that fully formed in that original six pager and the original discussions?  

Al: There are two separate things: the vision and then the manifestation of the vision. It’s a journey, and they’re still very early in the journey towards the vision. If the vision is ambient computing, the Star Trek computer, it’s still very much task oriented. It’s getting more conversational. They’re making headway, but there’s still quite a gap. So I’d say the vision hasn’t wavered. I think Jeff’s vision of ambient computing is that the device disappears; you shouldn’t even know that it’s there, just like the Star Trek computer. Did you ever see a microphone hanging from the ceiling? No, it’s just ambient. It’s there. It’s always available. You don’t have to turn your head to talk to it, and it’s going to work and respond and do what you need it to do. That part of the vision I don’t think ever wavered, and we used it as a stake in the ground.

It drove us to solve the hard problems first, like far field speech recognition, because it would have been easier to say that’s the vision, but why don’t we start with what Siri just launched, since they launched literally three months after we started this initiative, and we all went, “Oh.” But there’s a huge difference in the magic between close talk and the far-field ambient computing device.

It just reinforced our decision to double down on that differentiation, which is: “It’s just there. Talk to the room.” You don’t need to pick up the device and hold it next to your face, and it’s low friction. I think another good example would be all the streaming music devices.

People would say, why do I need another music device? That’s all Alexa really is. I was, and still am, a big user of Sonos products, and we actually worked closely with them. But in the beginning, when Sonos didn’t work with Alexa, I found myself not using my high-fidelity system as much, because to take a phone out of your pocket, unlock it, navigate to an app, open it, count to five with latency everywhere, choose a destination set of speakers, choose some music, press play: 20, 30, 40 seconds just went by. I might have walked in the house with my arms full of groceries. I can just turn to my device and say, “Alexa, play some music,” and it starts playing. It might not be the same fidelity, but that friction is real.

Like touching glass, a lot of people’s first response to Alexa was, “I can do all that with my phone. Why do I need another device?” Let’s take a look at some of the friction use cases; use it for a while. It’s not for everyone, but for a lot of people who ended up loving the product, I found it was that removal of friction that kept them coming back.

It’s so easy. It just works. Getting my phone is a hassle; I always have to have it with me, and there’s all that latency in those interactions. So that was a differentiator. Just to bring that back: I think all of that, the ambient, the ease of use, just talking across the room and accomplishing what you want using your voice, like speaking to a trusted friend, all of that was the vision.

And that didn’t change from the original six pager. Everything else did, right? What apps do we launch with? What are the most important capabilities out of the gate, and what capabilities need to come later? All of that was horse trading: what’s achievable, what assets do we have that we can leverage, and what do we need to go out and get.

And I’d say then there was a meandering path. We rewrote that six pager half a dozen times in the first year, that I can remember. And then there were separate six pagers for the developer programs and how we create an ecosystem where developers will come and build for the platform, because if everything Alexa ever does is only ever built by us, it’ll never be smart enough.

Aaron: Yeah. Out of curiosity, what is the litmus test for a decision or opportunity that requires a six pager?  

Al: There’s probably half a dozen different ways to answer that. Some might say you’re looking for funding or headcount that you don’t already have. 

And so you want some powers that be to make an investment so that you can go and build it. That’s how Fire TV came to be. A TPM on Fire Phone had the idea, wrote the six pager, and the next thing you know, there was an entire program built up around it and a very successful, large business.

It could just be exploring ideas, the flinging-mud-at-the-wall thoughts, because the six pager is less about the document and more about the process of thinking and doing your research and looking at alternatives and weighing them, and then coming back and trying to succinctly sum up, “Hey, we had an idea to do X.”

And so we went and explored: here’s what the competition’s doing, here are six different ways we could do it. The first three we dismissed out of hand for this reason; the other two have merit. We could go this way, we could go that way. So it’s really about the thought process of exploring an idea.

Yeah. And that could apply to anything, small or large, but generally, if you wanted to break ground on some new product or idea or something that’s customer visible, working backwards from that customer experience, you’d write a six pager.  

Aaron: Okay. Amazon is famous, slash infamous depending upon whom you’re talking to, for having a very intense, very demanding work culture. I would imagine you were in one of the most demanding pockets of Amazon for a number of years. What enabled you to be successful in such a demanding culture?

Al: We often referred to it as being close to the sun, because this was at the phase where Jeff Wilke was running Amazon retail.

As the CEO of Amazon, Jeff Bezos was focused on AWS and forward-looking things. Jeff would often say he lives in the future. He’d look at you and say, your responsibility is next month and three months from now; I’m thinking about two years from now. That’s our sphere of focus.

So living in the future meant that something like Alexa, which he viewed as the next pillar, got a lot of his attention, like spending time with him weekly on various six pagers and progress, from development up to when we launched and onwards. And so it was intense. He’s very demanding.

You need to be ready for those meetings. When I only had one or two a year with, say, Amazon Prime or DV rentals, you put a lot of work into preparing for them, and then you come out of there with a lot of information and learnings, go back and revisit, and often might go in a different direction.

Jeff really is probably the smartest person I’ve ever met, and he has this amazing ability to have insights into something you’ve been looking at for months with a team of experts; the six pagers are meant to help enable this behavior. He’s able to spend 20 minutes coming back up to speed on whatever’s going on in your space and then have uncanny insights that we all missed.

How did we miss that? Or, we didn’t think of that. He’s able to stitch things together because he sees everything. It’s just like: what about Audible? We should bring them in, and here’s how we could create a unique offering for content creation on a device. He’s got the broader, big picture. People spend a week preparing for a meeting with him and get burnt out doing it twice or three times a year.

We were doing it two or three times a week, and it was intense. What made it possible? Just a phenomenal group of individuals and experts all around us, whether they were scientists or product managers. It was just a great leading cast of folks who were involved.

If anyone ever asks what enabled us to build Alexa in those three years, it was every single person that was on that team. Without them, we might not have gotten there. There was no one person who I think can take more than 5% of the credit. It takes a village.

Current Applications of AI

Aaron: Yeah. That’s amazing. What are you seeing today? Coming back to AI specifically, what are you seeing today that most excites you in terms of applications?  

Al: It’s interesting that you guys are focused in the medicine and health space. I have been involved quite a bit recently with companies in that space. 

One of them is WISEcode. I sit on their board, and they’re a nutrition data company. What’s interesting there is that all the nutritional information we use to make our decisions about what we eat and buy and shop for is flawed. There are only 14 pieces of data legislated to be on that package, and they’re accurate to plus or minus 20%, and that’s considered okay.

They use averages: “Oh, we’ve got flour.” We might know that it’s a special kind of flour, like in a bagel that’s higher in protein or starch or something, but the FDA is okay if we just report based on an average blend of flours. So you’re getting a lot of incorrect or incomplete information, and it’s all at the macro level; there’s no micronutrient information.

What they’ve done that’s amazing, I think, is test all of the ingredients that go into food that gets manufactured in America and then build up a database where they can use AI to derive fairly accurate, full nutrient profiles for products without actually testing the product. So you can have a much more complete view of what’s in a product that you’re putting into your body.

It’s a really unique approach using both data science and AI, and I find it really fascinating. They haven’t publicly launched a consumer product yet, but when they do, I think it’ll be the first time people will have had access to that level of data.

I think it’ll be interesting to see what sort of change it creates back into food manufacturing once people have the power to say, “Oh no, I’m not going to buy that anymore.” “Oh, what would you buy? We’d better change our formulas.” There are more advanced things going on that I’m really interested in but not directly connected with, though.

There’s a cancer treatment company out there that has taken data from tens of thousands of courses of chemotherapy, along with all of the patient profiles, their DNA and a lot of other information, and they’re able to build a model that will predict, personally for the individual, which course of treatment will have a higher probability of being successful in combating that cancer.

It’s all about personalized health rather than generalized health. And it’s proven, because they’ve been at it long enough. To me, that’s fascinating, because right now everything’s more general: “80% of people respond really well to this drug, so we’re going to start there.”

Aaron: Yeah.  

Al: But what if I’m not in that 80% of people? What if I’m in the 1%? How would you identify that? What if you knew more about my DNA, my history, my family history, the data that’s missing? If you had that data, you could say, “Actually, yeah, you’re right. That 80% course of treatment is wrong for you. We’re going to put you on this other one, and you’re more likely to survive.”

That’s really interesting as an application, and in medicine, if you take that model and apply it to all diseases, or more broadly to human health, you can see where large data and AI are going to start to really change the way we read imaging, read blood tests, and look at all of the things that we have done in the past.

Now, having what human doctors used to interpret be interpreted by trained large models, and giving that interpretation to the expert to bring back to you, is going to greatly improve outcomes, I think, in courses of treatment.

The Future of AI in Healthcare

Aaron: Let’s remove the constraint of a time horizon. What is your vision for the role of AI? And let’s constrain it just to, let’s say, healthcare and medical devices, to the health space. What role does AI play in the future?

Al: It’ll replace a number of traditional roles in diagnosis, imaging, testing, et cetera. Right now, there’s a lot of human-in-the-loop work involved in conducting the test, like steering when you’re getting your scan done, measuring on a screen, and making notes; it’s error prone, and they miss things.

I think some of those roles will change or go away. I think everyone will become an expert because they’ll have the expert systems to help them interpret, and the role of the doctor, the expert, or the specialist will start to change because they’ll have more complete and deterministic information about you and your course of treatment. It’ll be more about administering that than diagnosing it in the first place. That burden moves off of the technicians and the support staff and the doctors and the surgeons and onto the computers, the AI, the robotics in the operating room. I actually have a hard time envisioning what the role of the physician will look like, enabled with all that technology.

But it’s definitely very different than it is today.  

Aaron: Yeah. What constraints exist today that are in the way of reaching that future? 

Al: Similar to self-driving, I think humans have a hard time getting out of their own way. So there probably already are things where we could use more automation, but people are afraid to because what if it made a mistake? 

Humans make mistakes every day in the operating room. But it’s like this: if there’s one Tesla accident, it’s all you hear about on the news, but the 12,000 accidents every day aren’t even a footnote. And it’s interesting, the bar is extremely high for technology to replace humans, and then it’s highly politicized as well.

So I think humans stand in the way, just our natural behaviors and tendencies to cling to the way things are and want to keep them that way. The comfort data will take time to amass for a lot of these. In a lot of ways, I’m hearing that some of the medical record keeping and data laws that came in the late teens, maybe in the Obama era, have enabled a lot of interesting technologies that weren’t envisioned when those laws were written; that wasn’t the purpose of the laws, but they’ve helped with that. And I think record sharing, pseudonymized or anonymized large-scale data availability, et cetera, will continue to improve in the medical space.

And as it does that, we’re going to see more and more applications of that data to things that will benefit human health.  

Aaron: Do you think there is a technical constraint, a hardware constraint, for example, that is in the way of these sorts of advancements?

Al: Full disclosure, I haven’t thought deeply about it, but just as you say that, obviously you start to think about things like nanobots and embedded devices. 

And really microcomputers that can operate at the blood level, continuous monitoring of your organs, things like that. I can’t even imagine what that looks like in the brain. But yeah, I’m sure the stuff of science fiction is what stands in the way.

Aaron: Yeah. Is there a hardware constraint in terms of, so much of this depends on data strategy and on the aggregation of data required to inform, train the models, and then continue to help them to derive the insights that they need going forward. Is there a constraint in terms of the infrastructure that would house the data and operationalize that data, or are we at a point where the evolution and the advancements in hardware are so continuous and predictable that we think that’s just going to continue to scale and the solution will be there when we need it? 

Al: You think about all that data, a place to put it, and then all the governance that has to layer over top of it, because you’re dealing with medical data: you’ve got to worry about HIPAA, you’ve got PII, and then in Europe you have a whole other animal with GDPR. I would say the systems exist to securely capture the data at scale, affordably manage and retain it, and share it if the laws and the businesses involved are willing.

So I feel like there’s no technological barrier to the data side of that necessarily.  

Bridging Experience to Startups

Aaron: Tell me a bit about Techquity.  

Al: Since leaving Amazon, I’ve been involved in advising, doing board work and trying to stay plugged in. Part of my motivation was to go back to small companies. So Amazon was a small company when I joined. 

In my opinion, anyway. They told me the tech team was 500 people, and I think the company was around 4 billion in revenue at the time, but it was not a very large organization. By the time I left, we were 1.5 million people. A different scale. Earlier in my career, I went through a similar cycle.

I started out at Northern Telecom, Bell-Northern Research, a massive company that doesn’t exist anymore. I left pretty early in my career because it was slow-moving and bureaucratic, and we were rolling out ISO 9001, which was really painful. It was big, fat, and slow, but the nineties were an opportunity with the deregulation.

So all these startups sprang up to try to move in and fill the void, the absence of telecom management software, et cetera. I worked at a string of startups in the late 90s and early 2000s until the dot-com bubble burst took away all those opportunities and drove consolidation in the telecom sector; the would-be customers all went away and the money all dried up.

I really enjoyed working in startups. I led teams in small software companies, building network management software and OSS software. When I came out to Amazon, that was a continuation of working in small scrappy startups. 

That was 2004, and I wanted to get back to that, to get involved with startups again and see if I could take everything that I’d learned over the course of a 30-year career and use it to help executives who are leading fledgling startups avoid some of the mistakes that they make. I thought it just sounded like it would be fun.

So I started doing it on my own, and I found it was lonely work. Then I accidentally ran into some folks, including a neighbor of mine who’s ex-Microsoft, Anthony Bay. He was at Amazon, and he was CEO of Rdio before they sold to one of the other streaming services. He had built this brand, Techquity, and it was four or five guys, all similar to me.

They had spent a lifetime leading large tech organizations, in big tech and in startups. They had stepped out of operating roles and wanted to work part-time with companies, to help leverage all that they had learned to have an outsized impact on the outcomes for those companies.

And so Techquity, I think, was a play on the idea of equity: in a VC firm, they put in equity in the form of money; we put it in in the form of experience and technology.

Aaron: Yeah.  

Al: We’re all tech execs, and the theory is every company is a tech company. If you’re not, you’re not going to be around for long. 

Your primary business might not be technology, but you need to understand technology and apply it correctly. Outside of Silicon Valley and Seattle, you’re not going to hire a Silicon Valley CTO to lead your technology organization. So getting time with folks like us is extremely valuable, almost unobtainium, for the other 98% of businesses.

So we would like to come and be your copilot and help you look around those corners, apply those patterns, recognize your scaling challenges, and have an outsized impact on your business, with your business being successful as a result. And by banding together, as I mentioned, it was lonely; now I have the former CISO of Azure and a serial entrepreneur who took multiple companies public, and each of them has different strengths: finance, security, technology, architecture, leadership, scaling. When we approach a client now, the solutions that are available to them across that diverse set of folks are really valuable, we believe. We’re early in getting started. We’re 10 or 12 folks, but we all have kind of the same pedigree of background.

Aaron: Yeah. That sounds like exceptional advice and an exceptional resource to draw upon. So if people wanted to get in touch with you through Techquity, how would they do that? 

Al: Yeah. You can reach me, Al at Techquity, and it’s techquity.ai. 

Aaron: That’s great. We’ll include that in the show notes as well. All right. I’ve really appreciated talking with you. It’s been a lot of fun. 

Al: Thank you. It was fun for me too. 

Related Links

Check out Techquity.

Contact Al Lindsay: al@techquity.ai

Questions or comments

Please let us know if you have questions or comments about this episode by emailing Grace Johnson at grace@wheelhousedmg.com. Want to be a guest on a future episode? Fill out the Be a Guest form at the top of the Digital Clinic page to submit your inquiry.

Make sure to subscribe to the Digital Clinic on Spotify, Apple Podcasts, or Amazon Music to stay up to date on our weekly episodes.
