Q&A: The Environmental Impact of Generative AI | MIT News

Vijay Gadepally, a senior research scientist at MIT Lincoln Laboratory, leads several projects at the Lincoln Laboratory Supercomputing Center (LLSC) that improve the efficiency of computing platforms and of the artificial intelligence systems that run on them. Here, Gadepally discusses the increasing use of generative AI in everyday tools, its potential impact on the environment, and some of the ways Lincoln Laboratory and the larger AI community can reduce emissions for a greener future.

Q: What trends do you see in the use of generative AI in computing? 

A: Generative artificial intelligence uses machine learning (ML) to generate new content, such as images or text, based on the data fed into the ML system. At LLSC, we design and build one of the largest academic computing platforms in the world, and over the past few years we have seen a surge in the number of projects that need access to high-performance computing for generative AI. We are also seeing generative AI transform a wide range of fields and disciplines; for example, ChatGPT is already influencing classrooms and workplaces faster than regulations can keep up.

Over the next decade or so, we can envision many applications of generative AI, including enabling more powerful virtual assistants, developing new medicines and materials, and even improving our understanding of fundamental science. While it’s impossible to predict all the uses of generative AI, we can be sure that its impact on computing, energy, and climate will continue to grow rapidly as algorithms become increasingly complex.

Q: What strategies is LLSC using to reduce its climate impact?

A: We are constantly looking for ways to make computing more efficient so that data centers can make the most of their resources and scientists can advance their fields in the most efficient way possible. 

For example, we have reduced the amount of power our hardware consumes by making simple changes, similar to dimming or turning off the lights when you leave a room. In one experiment, we reduced the energy consumption of a group of graphics processing units by 20 to 30 percent by applying power caps, with minimal impact on performance. This technique also lowers the hardware's operating temperature, which makes the GPUs easier to cool and more durable.
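To make the power-capping idea concrete, here is a minimal sketch using the NVIDIA NVML Python bindings (pynvml). The 75 percent cap fraction is an illustrative value rather than the LLSC's actual setting, and applying a cap requires administrative privileges on the node.

```python
# Sketch: cap GPU power draw via the NVIDIA NVML Python bindings (pip install nvidia-ml-py).
# The 75 percent cap is an illustrative value, not the LLSC's production setting.
import pynvml

CAP_FRACTION = 0.75  # fraction of each GPU's default power limit to allow

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        default_mw = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(handle)  # milliwatts
        min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
        target_mw = max(min_mw, int(default_mw * CAP_FRACTION))
        pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)  # needs admin rights
        print(f"GPU {i}: capped at {target_mw / 1000:.0f} W (default {default_mw / 1000:.0f} W)")
finally:
    pynvml.nvmlShutdown()
```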

Another strategy is to change behavior to be more climate conscious. At home, people might choose to use renewable energy sources or smart scheduling. LLSC uses similar techniques, such as scheduling AI model training for times of day when temperatures are cooler or when local grid energy demand is lower.
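As a rough illustration of that kind of scheduling, the sketch below delays a training job until the grid's carbon intensity falls below a threshold. The threshold, the polling interval, and the get_carbon_intensity() helper are hypothetical placeholders; a real system would query a grid operator or telemetry service.

```python
# Sketch: delay a training job until the local grid's carbon intensity drops below a
# threshold. The threshold, polling interval, and get_carbon_intensity() are hypothetical
# placeholders; a real system would query a grid operator or telemetry service.
import subprocess
import time

THRESHOLD_G_PER_KWH = 300   # illustrative carbon-intensity threshold (g CO2 per kWh)
POLL_SECONDS = 15 * 60      # re-check every 15 minutes

def get_carbon_intensity() -> float:
    """Return the current grid carbon intensity in g CO2 per kWh (placeholder stub)."""
    return 250.0  # replace with a call to your grid telemetry source

def launch_when_clean(train_cmd: list[str]) -> None:
    """Block until carbon intensity is low, then start the training job."""
    while get_carbon_intensity() > THRESHOLD_G_PER_KWH:
        time.sleep(POLL_SECONDS)           # wait for a cleaner, lower-demand period
    subprocess.run(train_cmd, check=True)  # launch training once conditions improve

if __name__ == "__main__":
    launch_when_clean(["python", "train.py", "--epochs", "10"])
```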

We also found that much of the energy spent on computing is wasted, much as a water leak drives up your water bill without doing your home any good. We developed several novel techniques that let us monitor computing workloads as they run and terminate those that are unlikely to produce good results. To our surprise, we found that in a number of cases we could terminate the majority of computations early without affecting the final result.
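The following is a simplified sketch of that idea: compare a run's validation metric against a reference trajectory and stop the job once it is clearly trailing. The slack value and the reference curve are illustrative assumptions, not the monitoring criteria actually used at the LLSC.

```python
# Sketch: terminate a training run early if its validation accuracy is clearly trailing
# a reference trajectory. The slack value and reference curve are illustrative, not the
# actual monitoring criteria used at the LLSC.
from typing import Sequence

def should_terminate(history: Sequence[float],
                     reference: Sequence[float],
                     slack: float = 0.05) -> bool:
    """Return True if any checkpoint trails the reference curve by more than `slack`."""
    return any(acc < ref - slack for acc, ref in zip(history, reference))

# Usage inside a training loop (outline):
#   for epoch in range(max_epochs):
#       train_one_epoch(model)
#       history.append(validation_accuracy(model))
#       if should_terminate(history, reference_curve):
#           break  # unlikely to produce a good result; stop and free the hardware
```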

Q: Can you provide an example of a project you have undertaken to reduce the energy consumption of your AI programs?

A: We recently built a climate-aware computer vision tool. Computer vision is a field focused on applying AI to images; such models can distinguish between cats and dogs in an image, accurately label the objects it contains, or find the components of interest within it.

The tool incorporates real-time carbon telemetry, which reports how much carbon the local grid is emitting while the model is running. Based on this information, our system automatically switches to a more energy-efficient version of the model, usually one with fewer parameters, when carbon intensity is high, and to a higher-fidelity version of the model when carbon intensity is low.
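The switching logic itself can be quite simple. Here is a minimal sketch under stated assumptions: the model names, the carbon-intensity threshold, and the get_carbon_intensity() helper are illustrative placeholders, not the LLSC implementation.

```python
# Sketch of the switching logic: serve a smaller model when grid carbon intensity is
# high and a higher-fidelity one when it is low. Model names, the threshold, and
# get_carbon_intensity() are illustrative placeholders, not the LLSC implementation.
MODELS = {
    "low_power": "small_vision_model",      # fewer parameters, cheaper inference
    "high_fidelity": "large_vision_model",  # more parameters, better accuracy
}
CARBON_THRESHOLD_G_PER_KWH = 300  # illustrative switching point

def get_carbon_intensity() -> float:
    """Return the current grid carbon intensity in g CO2 per kWh (placeholder stub)."""
    return 250.0

def select_model() -> str:
    """Pick the model variant to serve for the current grid conditions."""
    if get_carbon_intensity() > CARBON_THRESHOLD_G_PER_KWH:
        return MODELS["low_power"]
    return MODELS["high_fidelity"]

print(f"Serving: {select_model()}")
```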

Doing this reduced carbon emissions by about 80 percent over a one- to two-day period. We recently extended the idea to other generative AI tasks, such as text summarization, with similar results. Interestingly, we sometimes even saw performance improve after applying our technique.

Q: As users of generative AI, what can we do to reduce our climate impact?

A: As consumers, we can demand more transparency from AI providers. For example, Google Flights shows us multiple flight options along with the carbon footprint of each one. We should get similar kinds of measurements from generative AI tools so that we can make conscious decisions about which products or platforms to use, based on our priorities.

We can also work to better understand AI emissions in general. Many of us are familiar with automobile emissions, so it can be useful to discuss AI emissions in those terms. For example, people may be surprised to learn that generating one image produces emissions comparable to driving four miles in a gasoline-powered car, or that fully charging an electric car takes about the same amount of energy as generating roughly 1,500 text summaries.
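As a back-of-the-envelope illustration of those comparisons, the sketch below converts the figures quoted above into rough CO2 and energy numbers. The per-mile emissions factor (roughly 400 grams of CO2 per mile, the EPA's approximate average for a gasoline car) and the 75 kWh battery size are outside assumptions, not measurements from our systems.

```python
# Back-of-the-envelope conversion of the comparisons quoted above.
# Assumed inputs (not measured AI values):
#   - ~400 g CO2 per mile, roughly the EPA average for a gasoline-powered car
#   - a 75 kWh battery as a stand-in for "fully charging an electric car"
GASOLINE_CAR_G_CO2_PER_MILE = 400
IMAGE_GENERATION_MILES_EQUIV = 4        # figure quoted in the interview
EV_BATTERY_KWH = 75
TEXT_SUMMARIES_PER_CHARGE = 1500        # figure quoted in the interview

image_gen_kg_co2 = GASOLINE_CAR_G_CO2_PER_MILE * IMAGE_GENERATION_MILES_EQUIV / 1000
kwh_per_summary = EV_BATTERY_KWH / TEXT_SUMMARIES_PER_CHARGE

print(f"Generating one image ~ {image_gen_kg_co2:.1f} kg CO2 (a four-mile drive)")
print(f"One text summary     ~ {kwh_per_summary * 1000:.0f} Wh (1/1,500 of a full EV charge)")
```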

Customers are often willing to make trade-offs if they understand the impact of those trade-offs.

Q: What do you think about the future?

A: Mitigating the climate impact of AI is one of those challenges that people all over the world are working on, with similar goals in mind. We are doing a lot of work at Lincoln Laboratory, but this is only the beginning. In the long term, data centers, AI developers, and power grids will need to work together to provide “energy audits” and to explore other unique ways to improve computing efficiency. More partnerships and collaborations are needed moving forward.

If you would like to learn more about these efforts or are interested in collaborating with Lincoln Laboratory, please contact Vijay Gadepally.
