Applied Podcast – Ep. 8
AI, Data Centers, Energy, and Environmental Impact
Featuring Dr. Sudeep Pasricha
As they invest in new artificial intelligence technology, corporations like Google, Microsoft, and Amazon continue to build large data centers, which consume vast amounts of energy. Meanwhile, researchers are looking for new ways to increase the operational efficiency of these centers and reduce their environmental footprint.
Learn More
- “SHIELD: Sustainable Hybrid Evolutionary Learning Framework for Carbon, Wastewater, and Energy-Aware Data Center Management” | S. Qi, D. Milojicic, C. Bash, S. Pasricha | IEEE International Green and Sustainable Computing Conference, Toronto, Canada, 10/23. (Best Paper Award)
- “Robust Perception Architecture Design for Automotive Cyber-Physical Systems” | J. Dey, S. Pasricha | IEEE Computer Society Annual Symposium on VLSI (ISVLSI), 2022
- “Efficient Embedded Machine Learning Deployment on Edge and IoT Devices” | S. Pasricha | IEEE/ACM DAC, 2023
- “Cross-Layer Design for AI Acceleration with Non-Coherent Optical Computing” | F. Sunny, M. Nikdast, S. Pasricha | ACM GLSVLSI, 2023
- “Ethical Design of Computers: From Semiconductors to IoT and Artificial Intelligence” | S. Pasricha, M. Wolf | IEEE Design & Test, 2023
- “AI Ethics in Smart Healthcare” | S. Pasricha | IEEE Consumer Electronics, 2023
Beren Goguen (00:00):
Welcome to Applied. I’m Beren Goguen. My guest today is Dr. Sudeep Pasricha, a professor with joint appointments in the departments of Electrical and Computer Engineering, Computer Science, and Systems Engineering at Colorado State University. Dr. Pasricha is also director of the Embedded, High Performance, and Intelligent Computing (EPIC) Lab. In our conversation, we discussed the rapid ongoing evolution of artificial intelligence, especially as it pertains to computer engineering, data centers, energy sustainability, and AI ethics. Thanks for tuning in.
Dr. Pasricha (00:33):
It’s a pleasure to be here, Beren.
Beren Goguen (00:34):
Can you tell us a little about the research that you’re currently working on at EPIC?
Dr. Pasricha (00:39):
Absolutely. So at the EPIC Lab at CSU, for the last 15 years or so, we’ve been working on three tracks of research. The first has to do with manycore computing, or electronic chip design. Some of the work we are doing lately in this area involves designing new accelerators for AI and machine learning using paradigms such as optical computing or in-memory computing, which overcome some of the biggest drawbacks of conventional CPU and GPU platforms relating to data movement and fast processing. The second track of research has to do with embedded and IoT system design. Here we are looking at more applied algorithms and architectures for applications of relevance, for example, autonomous vehicle design. So we develop algorithms and architectures for emerging autonomous vehicles, and also for emerging applications like indoor navigation. And then the third and last track we are focusing on has to do with sustainable data center design. There we have again been partnering with industry and developing new mechanisms to improve sustainability, to reduce the carbon footprint and energy overheads of doing cloud computing or large-scale AI training in data centers.
Beren Goguen (02:05):
Okay, that’s really interesting. I know that data centers are growing and expanding, and companies are adding more of them because of the AI boom, which I imagine has a big effect on that. What are some of the tools and strategies that you’re investigating to make them run more efficiently?
Dr. Pasricha (02:24):
Yeah, this is an excellent question. So we are looking at ways in which we can better perform the work in data centers relating to AI and other applications as well. For example, one direction we are exploring has to do with geo-distributed decision-making. Today, companies like Amazon have hundreds of data centers spread out all over the globe, and you can actually exploit that geo-distribution to reduce the environmental impact of performing any sort of computation, be it AI or something else. For example, if we wanted to train an AI model, we could consider training it at a location where there is more renewable energy available at a given time, say, a location where the sun is out and there is more solar energy available, and we can move computation around to different sites based on that criterion alone.
(03:23):
However, we can also make other optimizations. For example, there might be places where the carbon intensity of electricity generation changes over time. If it’s high, that means electricity generation is relying more on dirty or brown energy as opposed to green energy. So we could, again, move to places where the electricity generation is less dirty and opportunistically perform our work there. This is related to renewable-aware migration, but it doesn’t rely only on renewable energy. If you want to save costs, we can also think about moving work to places that are currently not in the peak-power part of their day, where you’re not paying peak power prices. So again, you can save costs by moving work around to different data centers. There are actually many opportunities to save money and reduce environmental footprint by making intelligent decisions at this geo-distributed data center scale.
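The geo-distributed placement idea described above can be sketched as a simple scoring problem. This is a minimal illustration, not the EPIC Lab’s actual algorithm; the site names, numbers, and scoring weights are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    carbon_intensity: float  # gCO2 per kWh of local grid electricity right now
    energy_price: float      # dollars per kWh right now

def pick_site(sites, carbon_weight=0.7, price_weight=0.3):
    """Score each candidate site by a weighted blend of carbon intensity
    and electricity price (lower is better for both); return the best one."""
    def score(s):
        # Scale price into a range comparable to gCO2/kWh before blending.
        return carbon_weight * s.carbon_intensity + price_weight * (s.energy_price * 1000)
    return min(sites, key=score)

sites = [
    Site("us-west", carbon_intensity=250.0, energy_price=0.12),
    Site("eu-north", carbon_intensity=40.0, energy_price=0.09),
    Site("ap-south", carbon_intensity=600.0, energy_price=0.07),
]
print(pick_site(sites).name)  # eu-north under these made-up numbers
```

In practice such a decision would also weigh data-movement costs, latency constraints, and forecasts of carbon intensity and pricing over the lifetime of the job.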
(04:33):
So we have done quite a lot of work over the last decade in coming up with intelligent AI-based algorithms and other techniques to support these decision-making processes. We’ve also done quite a bit of work looking at just a single data center and modeling its environmental impact more holistically. For example, some of the earliest work we did, more than a decade ago, had to do with quantifying the cooling energy in addition to the computation energy, and coming up with techniques to save both types of energy. The techniques are a bit different across those two domains. So these are a couple of examples of the kinds of techniques we are developing: a lot of algorithm design, but also a lot of hardware-based characterization and modeling together.
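The split between computation energy and cooling energy is often summarized with power usage effectiveness (PUE), the ratio of total facility energy to IT energy. PUE is a standard industry metric rather than something named in this conversation, and the numbers below are purely illustrative:

```python
def total_facility_energy_kwh(it_energy_kwh, pue):
    """Total facility energy is IT (compute) energy times PUE; the
    overhead factor (pue - 1) is mostly cooling and power delivery."""
    return it_energy_kwh * pue

it_load = 1_000_000  # 1 GWh of compute energy over some period, illustrative
legacy = total_facility_energy_kwh(it_load, pue=1.8)  # older facility
modern = total_facility_energy_kwh(it_load, pue=1.2)  # efficient facility
print(legacy - modern)  # energy saved by cooling/overhead improvements alone
```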
Beren Goguen (05:27):
I imagine that keeping these places cool is a big factor and has a big impact on the cost. Are companies looking to build more data centers in the north, where it tends to be cooler and easier to keep things cool than in the south? Is that something that’s really being looked at?
Dr. Pasricha (05:45):
Yeah, absolutely. So companies are absolutely looking at saving costs relating to cooling, and so they definitely want to position their data centers maybe close to a river where they can extract a significant amount of water to cool the data center or in cooler climates where the temperature is in general lower than in hot climates.
Beren Goguen (06:10):
Are companies currently using some of your research and applying it to make improvements, or is it still sort of a new area?
Dr. Pasricha (06:19):
So a lot of the research that we’ve done in the sustainable data center and high-performance computing space has actually found applications in different avenues. For example, we collaborated with Oak Ridge National Laboratory and the Department of Defense starting more than 10 years ago, and we developed algorithms for energy-efficient scheduling and resource management in their large-scale data center facilities. They did apply a lot of the techniques developed in our lab to their high-performance computing clusters and data centers. We are also currently working closely with HP Labs, and they’re very interested in some of the algorithms we’re developing for geo-distributed decision-making in data centers. We’ve been working with them for the last few years, taking some of the algorithms we are developing and applying them to their data centers and clusters. In fact, a lot of my PhD students have graduated and gone on to work at HP Labs, and they’re taking some of the research they did in our lab and applying it there.
Beren Goguen (07:26):
That’s great. It’s so important and beneficial when the research can be brought out and applied in industry, where it obviously has an impact. So artificial intelligence is obviously a big topic right now; a lot of people have been talking about it over the past year. What is something that our listeners might not be aware of as it pertains to the current AI boom, something that’s maybe not talked about as much?
Dr. Pasricha (08:01):
So there are a lot of facts I can share with you about the hidden costs behind developing some of these large models, like ChatGPT, that we are all using more and more today. For example, it takes several millions of dollars just in energy costs to train and develop these models; each model takes millions of dollars for the energy alone. But if you think about the entire ecosystem around a model, including gathering the terabytes of data, building the compute platforms that need to perform the massive amounts of computation, and then the people involved and their salaries, and the facility and its costs, the cost can actually be hundreds of millions of dollars for each of these models. So there’s a massive cost involved, and there’s also a massive environmental impact.
(08:57):
For example, every time we have an interactive session with ChatGPT, maybe a few tens of prompts, the data center serving that session consumes about 16 ounces of water for that one session. And that’s just water use; the electricity use and other costs can add up pretty quickly. So I don’t think we consider a lot of these environmental costs when we are using these emerging AI large language models and foundation models.
Beren Goguen (09:38):
That’s good to know. I know that is a topic some people will talk about and think about, but others may not be aware of just how much energy and cost is involved. So I imagine a big part of the innovation in that industry is looking at how to make it more efficient, how to not just cut costs but also reduce the impact on the environment.
Dr. Pasricha (09:59):
Well, there is certainly awareness today of the environmental impacts of training and developing these models. However, reducing the environmental footprint is not as much of a priority in a lot of the companies that are actively developing them. There is certainly interest in doing so, and there are certain incentives, for example, the economic incentive to reduce the massive costs associated with these AI algorithms. But in general, I think a lot of companies today are just struggling to come up with models that can be financially viable, and the economic case is a more pressing concern for these companies than the environmental overhead of these models.
Beren Goguen (10:47):
Do you think the efficiencies gained from AI as a technology overall could offset the infrastructure and energy demands that AI creates or do you think it’s going to be running at a deficit?
Dr. Pasricha (11:01):
That’s a great question. I do think, yes, to a large extent AI can offset some of the overheads that are created by AI, but there is an important distinction between the kinds of AI involved. So let me go a little deeper into that. The AI models that are the poster child of consuming huge amounts of water or energy are massive foundation models. These are, for example, large language models or multimodal models that work with different types of data: video, text, images, and so on. These are really massive models; we are talking about billions of parameters. They take up terabytes just to store on disk, they require hundreds of terabytes of data during training, and the training itself can last for weeks to months. So there’s a massive footprint involved with these sorts of AI models.
(12:01):
And if we think about the environmental footprint, there are again massive carbon emission overheads, water use overheads, and of course the associated energy costs. So these sorts of models are certainly what a lot of us are thinking about in terms of environmental footprint and how to reduce it. Generally, these AI models tend to run on data centers or large high-performance computing facilities, and there is a huge number of these data centers today supporting not just AI model training and development but also other types of applications. For example, all of the billions of Internet of Things devices that we rely on, our smart thermostats, IP cameras, our phones whenever we use social media: all of those services are hosted on data centers, with software running on these massive facilities.
(12:57):
Now, these data centers are themselves massive consumers of electricity, and they have a massive environmental footprint as well. Today it’s estimated that data centers consume about 2 to 3% of global electricity, and this percentage is expected to go up to 4% or higher within the next few years because of this boom in IoT and connected devices, and also AI. It turns out that there has been some work looking into using AI to reduce the environmental footprint of data centers, and in fact, in the EPIC Lab at CSU, we’ve been doing quite a bit of work along these lines, as I discussed a few minutes ago. These algorithms, however, are not the same as those large language models or foundation models. They are much simpler AI algorithms that can nonetheless monitor the data center as it operates and come up with intelligent strategies to reduce, for example, cooling energy.
(14:03):
There have been some very interesting instances of this. For example, even beyond the findings from our research, companies like Google have shown that they can save around 40% of cooling costs with the help of AI algorithms that monitor and make decisions inside their facilities. So the exact trade-off between these massive AI models, with their environmental footprint and energy demands, and the savings we can get from these smaller AI algorithms may even out, but it’s hard to tell. I do feel there’s a lot of scope, though, especially since data centers are used for a lot more than just AI training. If you think about all the Bitcoin mining and the billions of IoT devices supported by data centers, even a small percentage reduction in the energy footprint of data centers can mean a significant reduction in their environmental footprint.
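To put that last point in rough numbers: only the 2 to 3% data center share comes from the conversation above; the global electricity figure is an approximate outside assumption, and the 5% efficiency gain is purely illustrative.

```python
# Back-of-envelope: why a small percentage reduction still matters.
GLOBAL_ELECTRICITY_TWH = 25_000  # rough annual global electricity use (assumption)
dc_share = 0.025                 # midpoint of the 2-3% data center estimate
dc_twh = GLOBAL_ELECTRICITY_TWH * dc_share  # data center consumption, TWh/yr
savings_twh = dc_twh * 0.05                 # a modest 5% efficiency gain
print(round(dc_twh), round(savings_twh))    # 625 31
```

Even under these conservative assumptions, a single-digit efficiency gain corresponds to tens of terawatt-hours per year.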
Beren Goguen (15:07):
I wonder, then, if machine learning could be used to increase the efficiency of green energy technology, like the systems that monitor and run windmills or solar farms. Perhaps machine learning could increase efficiency there, generating more clean energy that could then offset some of that demand.
Dr. Pasricha (15:33):
Yes, absolutely. I think there’s a lot of interest in using AI and machine learning, for example, in the smart grid or for example in smart homes to better manage the energy use inside our homes and inside different types of facilities. Smart buildings also can benefit from these sort of AI and machine learning models. So you’re absolutely right.
Beren Goguen (15:56):
And I also imagine, if we do get to the point in the next decade or so where autonomous vehicles are safer and more prevalent, that vehicle sharing becomes more prominent, where people don’t feel that they need to have their own car. They can just have a car come and pick them up, without anyone driving it, take them to the store, and drop them off. If that technology can be implemented in a smart, efficient way, I imagine there could be some significant benefits to the environment there.
Dr. Pasricha (16:27):
Yes, that is an excellent point. In fact, that is one of the selling points of autonomous vehicles: they can make driving a lot safer and cut down the number of accidents that happen on the road, but they can also lead to better path planning and more efficient usage of our vehicle fleets. There are absolutely huge benefits to be had, which is why there’s a lot of excitement around AVs.
Beren Goguen (17:00):
You did mention that your lab is working on autonomous vehicle research. Could you tell us a little bit about what you’re looking at specifically?
Dr. Pasricha (17:08):
Sure. So our interest in autonomous vehicles really started about a decade ago. We started working with the Department of Energy as part of their EcoCAR project. CSU was one of around 12 universities in North America selected to be part of this project, where they actually delivered a car to us and asked us to make it more energy efficient. In our department of electrical and computer engineering, we had our undergraduate and graduate students work with students from mechanical engineering and systems engineering. The mechanical engineering students took out the engine and put in interesting alternatives with hydrogen cells or hybrid engines, and in our department we looked more at the smart perception architecture design aspect of these vehicles. How could we more accurately and more robustly detect what’s on the road?
(18:05):
How can we detect pedestrians and traffic signs, and then how could we modulate the vehicle’s operation so that it runs in an energy-efficient way? So we developed control strategies and algorithms for changing the way the car accelerates or decelerates when it’s coming close to a red light or a stop sign. All of that was driven by another set of algorithms we developed that relied on data from cameras, radars, and lidars to accurately perceive the environment around the vehicle. We’ve continued along those lines, and our more recent work has been looking at a more holistic set of problems: how can we not only perceive the environment with multimodal data from cameras, lidars, and radars, but also optimize the AI algorithms used to process that data, while simultaneously figuring out the best sensors to use in any given scenario and where to even mount those sensors on a given vehicle? These are all very important decisions we need to make if we want an efficient perception solution.
Beren Goguen (19:17):
Absolutely. My understanding is that the technology of what people would call computer vision or the ability for computers to perceive the real world has made some advances in the last few years and is continuing to advance. Do you feel that we’re getting closer to the point where computers and algorithms and the technology that we’re using can better understand our real world and interact with it safely?
Dr. Pasricha (19:43):
So that’s an interesting question. I think what you’re alluding to is artificial general intelligence, as practitioners in the area talk about it. And in my opinion we are still quite a ways away from that sort of artificial general intelligence: intelligence that’s really able to make sense of more than what it’s seeing, or put it in the proper context of a broader setting. Today, with the help of machine learning algorithms, we have absolutely started to outperform humans in terms of perception, in terms of accuracy for video classification and image classification. So these sorts of solutions are certainly much better today than what humans are capable of. But in terms of a general ability to understand the environment, I would say we still have some ways to go.
Beren Goguen (20:36):
Right, and of course, understanding is kind of a human term, and it’s complicated when you talk about understanding or perceiving the world; that can mean multiple things: visual data, audio data, spatial data. There are so many different things that we take for granted as humans with eyes and ears and senses, and obviously computers don’t work the same way. So it’s easy for us to confuse the way computers work with the way we perceive the world, and there’s still a disconnect there in terms of that functionality, right?
Dr. Pasricha (21:16):
Yeah, absolutely. And I think we are now starting to talk about some philosophical questions about what it even means to comprehend reality, and what reality is, and so on. Interestingly enough, these are questions that AI developers are starting to grapple with, because they are trying to build AI systems that mirror the human brain and then eventually go beyond it. But we are not there yet. We still have not entirely understood how our brain works. We understand some mechanisms, for example, how auditory or visual information is processed by different neurons in different parts of the brain, but we don’t entirely know how memories are formed or how concepts are formed. The plasticity inside our brain, the stability of the concepts that are stored, and how all of that changes: we are still grappling to understand how those mechanisms work. So we are not going to get there in terms of building such AI models until we can actually get a good understanding of the human brain.
Beren Goguen (22:31):
There’s a lot of speculation as you know about if and when we might achieve some form of AGI. Some people are saying it could be in the next five years, some people are saying it’s going to be decades. Do you care to offer your thoughts on when or if you think that could happen?
Dr. Pasricha (22:51):
Yes, I can give you my thoughts on it. However, I will also say that I’m not sure they would be accurate, because making predictions of this sort is always tricky. I would say that we are getting close to this point, but the closer we get, the further we move away from what we understand to be true AGI. So I would say we are maybe at least five to 10 years away from getting anywhere close to AGI. When we interact with these large language model based chatbots, ChatGPT and so on, we may actually feel like we are interacting with another human. But to be human is more than just being able to respond to queries, and in fact these AI solutions don’t really respond accurately all of the time. I do think this whole notion of artificial general intelligence is closely tied to philosophical questions about what it means to be alive, what sentience is, and what that looks like in its different forms. Those are tricky, very difficult questions to answer. So I can only venture a guess: five to 10 years before we see models that could potentially be embedded into some sort of android or robotic form that emulates a human more closely than any of the systems today. But whether it would be a sentient entity is hard to say.
Beren Goguen (24:28):
Right. And in science fiction, there’s a tendency to want to anthropomorphize computers and give them an android body, whereas in computer science you don’t need that at all. You could have an AGI that just lives on computer servers without a body, and it would theoretically be functioning like a human, but living in a box essentially. What do you think the companies who are striving toward this should be doing to keep everyone safe? I think one of the main concerns people have is that something like this gets out onto the internet somehow, or is somehow able to roam around freely. Is that just science fiction? Would it be pretty easy to keep something like that contained if they did create it?
Dr. Pasricha (25:20):
Yeah, this is again a very interesting question. So I do think that we need to think very carefully about the ethical issues around AI and the ways we are thinking about using it. In fact, I’ve been giving several talks about the implications of using AI in different domains. Most recently, at a conference a few weeks ago, I talked about how AI is being used more and more in healthcare and the many challenges associated with AI in that domain. As we integrate these AI algorithms into, for example, wearable devices, we now have potential issues relating to data privacy: a lot of the data being collected is shared with entities we may or may not know about. There are also issues relating to the decisions that will be made with the data that’s collected, and those decisions oftentimes tend to be biased.
(26:23):
As we have seen in recent years with AI algorithms, the data sets we train these models on have a significant impact on how they behave, and a lot of times we have not taken great care in ensuring there is equal representation in our data sets for the domains we care about. As a result, these data sets are skewed, the decisions made by these AI algorithms are skewed and biased, and they can hurt certain populations, which is unfortunate. So there are aspects of bias, aspects of privacy, and many other aspects relating to things like transparency: to what extent should we as consumers know how these AI algorithms are being used to make decisions that impact our lives? As a patient, should you be told in clear terms how your doctor is using these algorithms?
(27:27):
And there are issues relating to security. A lot of these devices are not very secure, so they can be hacked, and more easily than many other types of systems. For example, if you think about smart pacemakers or other newer types of devices that people are starting to implant in the human body, they can be susceptible to these sorts of attacks. We need to think about the implications of designing systems where attacks are a given but should not cause harm; there should be some fail-safe, and these systems need to be designed quite carefully. So there are many concerns relating to the use of AI across domains. We are starting to think about them in domains like healthcare, but we also need to think about AI use in autonomous vehicles and in other spheres where these algorithms are going to have an impact on our everyday lives.
Beren Goguen (28:36):
Yeah. When you get into the topic of AI ethics, it seems like a lot of companies are at least paying lip service to that, but I often question whether they’re really investing the resources necessary into AI ethics and safety versus just innovation and technology. Do you feel that they probably need to do a little more investing in that area?
Dr. Pasricha (28:59):
Yes. I absolutely think that companies today pay a lot of lip service to this; they indicate that they care about ethical concerns. However, a lot of these ethical concerns are not even clear to the folks developing the algorithms, let alone the people who are going to be using these algorithms in completely new contexts that we have never experienced before. So I absolutely think that a lot more needs to be done. In fact, some of my recent work talks about what companies can specifically do to address these concerns with AI models and their impacts. I’ve talked in recent articles about how we can do a lot more, and do things differently, for AI in smart healthcare systems and in general AI systems, as well as thinking about ethical concerns holistically for the entire ecosystem, involving not just the algorithms but also the hardware these algorithms run on.
(30:06):
And in fact, that’s something that is perhaps also not common knowledge: there are ethical challenges in the development of a lot of the electronics being used to accelerate training and inference of these AI models. There are issues, for example, relating to conflict minerals used in designing many electronic chips. There are concerns with toxins and chemicals that have been shown to have significantly negative impacts on people who work in semiconductor fabrication plants. There are also challenges with the end of life of a lot of these devices, our mobile devices, IoT devices, but also data center components. When we are done with them and recycle them, these devices generally get shipped off to some country where they get burnt to recover valuable metals, but a lot of the toxins and chemicals leach into the groundwater and the air, and that devastates ecosystems in different parts of the world. So there are many aspects we need to think about holistically when we think about ethics and AI.
Beren Goguen (31:11):
Absolutely. It’s so easy to not think about what happens in another country, but the impact is not distributed equally across the globe. So that sounds a little bit related to my next question, which is do you feel that there is a disconnect between computer scientists and computer engineers or hardware engineers when it comes to scalability and safety of AI?
Dr. Pasricha (31:34):
I would say yes, there is a bit of a difference between how these challenges are perceived by computer scientists and computer engineers. I actually work a lot with computer scientists, and I’m a computer engineer, but my PhD is in computer science, so I’m well enmeshed in both of these communities. I do get the sense that in computer science there’s a lot more emphasis on the software aspects of these systems. Of course, the challenges with scaling AI are known to the software community, to computer scientists, and they are working really hard to come up with creative solutions to scale up these algorithms so that they can solve more complex problems and be more effective in different scenarios. But if you think about sustainability, and about doing something about the environmental footprint of AI and the data centers associated with these algorithms, computer engineers have an edge, because computer engineers generally have a more holistic understanding of both the hardware and software artifacts involved in these systems. I truly believe that if you want to address sustainability, you cannot do it from a software perspective or a hardware perspective alone; you have to look at both together. Effective solutions, like some of the ones we have proposed in our research lab, combine aspects of both hardware and software and optimize both to achieve holistic goals more efficiently.
Beren Goguen (33:21):
What is one thing that the general public should be more aware of when it comes to AI, data centers, or the field of computer engineering?
Dr. Pasricha (33:30):
So I would say that we should all know that data centers are what support our massive digital revolution today. All of our cloud services, all of the IoT devices running these services, depend very heavily on these data centers. Increasingly we are putting AI algorithms into these devices, and data centers are increasingly being used to train these large AI models. So there is a massive environmental cost associated with AI and data centers, and I think we cannot ignore that anymore. We need to do something urgently about the environmental aspects of both AI algorithm design and its penetration into society today, as well as the increasing use of data centers. I will also add that I do think computer engineers have the right set of skills to deal with these challenges in a more holistic way, looking again at both the hardware and the software, which are fundamental components of this AI ecosystem and also the cloud data center ecosystem.
Beren Goguen (34:37):
What concerns you the most about the future of our technology, looking out into the next 10 to 20 years?
Dr. Pasricha (34:48):
I would say I have two major concerns. The first has to do with the negative impacts of technology on humanity. This is a topic that has been covered in a lot of movies and TV shows and books for a long time now. But now that we are starting to see these AI algorithms become more and more involved in the solutions that we use on an everyday basis, I do feel that there are some concerns we need to be aware of. For example, a few minutes ago I talked about bias. I do believe that as we use these algorithms more and more, bias arising from shortcomings in the development of these solutions is going to have a greater and greater impact on our society, and it will impact the individuals who are the most vulnerable. So we absolutely need to be thinking about this.
(35:44):
There are also going to be some growing pains. As AI and technology start penetrating different spheres of our lives, we will see, for example, a change in the way that our job markets are structured. We may see a significant number of jobs lost. Just to give you an example, autonomous vehicles are expected to cause a significant reduction in job opportunities for truck drivers. Today we have more than 3 million truck drivers in the US, and more than 5 million additional people employed in the trucking industry in general. When we start using autonomous fleets of trucks, we will see a significant shift, somewhat similar to the sort of shift we saw when mechanization came to agriculture over the last century. And just to give you some numbers, around the 1900s there were about 16 million people in the US involved in agriculture and the day-to-day activities related to it.
(36:48):
But today, this number is much smaller, maybe between one and two million people. So there are going to be some significant shifts in the job market relating to the use of AI and technology, and I think we need to be ready for that, with programs for workforce retraining. These things are very important. And then of course there is the issue of deepfakes. As we start consuming more and more content generated by these AI algorithms, these algorithms can certainly be used by nefarious actors to harm democracies and to influence elections, and these are things we need to be very careful about. So I worry about the negative impact on humanity. I am also, as I mentioned earlier, very concerned about the environmental costs of a lot of these technological solutions, particularly the AI-based ones that we are starting to use more and more.
Beren Goguen (37:48):
Okay. So my last question, what excites you the most about the future of our technology looking into the next 10 to 20 years?
Dr. Pasricha (37:57):
So despite all the doom and gloom that many of us share about the negative impacts of technology, I do remain optimistic that technology can also have a significantly positive impact on humanity. Just coming back to the subject of jobs: I talked about how jobs are potentially going to change, and the nature of the work that we do will change as more and more AI and technology gets integrated into everyday life. But we are already seeing today that there are a lot of new types of jobs being created. Data analysts have been around for a while, but now we are seeing prompt engineers for these large language models, data curators, AI auditors, even ethics specialists being employed by companies. So I do believe there is going to be a shift, but there will be a net positive in terms of the type of work that will be created.
(38:54):
Another example I can give in terms of the benefits that I foresee: autonomous vehicles, again, can save many of the lives that are lost in accidents on the road. Today, roughly one to two million people die worldwide due to road accidents, and autonomous vehicles are projected to reduce that number of casualties significantly. If you think about healthcare, diseases like cancer are leading causes of death worldwide. Cancer, in fact, is projected to be the second leading cause of death, right below cardiovascular disease. But today we are making use of intelligent AI algorithms for drug discovery, to come up with approaches to deal with different forms of cancer. And I do feel there are going to be a lot more interesting outcomes from these sorts of studies that will help reduce mortality rates all across the globe.
(39:56):
And then the last thing I will say is that even though the technology and solutions we are developing have an environmental impact, they can also be used to reduce the environmental impact of our everyday lives, and of companies as well. For example, Google recently had a project where they used an AI algorithm to analyze atmospheric data and guide airline pilots toward flight paths that would leave the fewest contrails. Contrails are condensation trails left behind aircraft, and they are something to be concerned about because they reflect heat from the surface back down, which contributes to global warming. This is just one example. There have also been studies showing that AI-run smart homes can reduce the household carbon footprint by significant amounts, by up to 40% in some cases. Because of all of these scenarios, I do feel there are some significant benefits we can expect from technology on the environmental front as well.
Beren Goguen (41:14):
Dr. Pasricha, thank you so much for taking the time to talk. I really enjoyed learning more about this topic.
Dr. Pasricha (41:22):
It was a pleasure to talk to you, Beren, and I hope anyone listening now has a better understanding of some of the challenges we face with technology, but also some of the promising developments happening with that same technology.
Beren Goguen (41:36):
Absolutely. Thanks again. Thanks for listening to this episode of the Applied podcast. If you’d like to learn more about Dr. Pasricha’s research, you can find links to some of his articles in the show notes. And of course, if you’re considering additional education, be sure to explore CSU’s fully accredited online degree and certificate programs in computer engineering and computer science. Take care.