51 billion tonnes of yearly greenhouse gas emissions stand between humanity and a climate disaster. According to Bill Gates in his book, How to Avoid a Climate Disaster, we have just 30 years to reduce that number to zero.
People build AI for people. As Mark Twain said (a quote Energsoft loves), to get something finished, you must first get started. And this is just the starting point. AI is a priority for Energsoft: ensuring our customers and partners use and build safe, reliable, and fair AI products and services. We are developing a recommended Energsoft engineering process for building trustworthy AI systems for battery storage lifetime prediction, anomaly detection, and analytics. Trust in AI systems will depend on whether they can operate reliably and safely once they function in the world. From the mission-critical requirements of batteries powering surgical robots and autonomous mobility to the risks of failures and blind spots in systems that learn from data, safety and reliability are fundamental to AI systems. These technologies must be designed with the benefits and risks to different stakeholders in mind, and they must undergo rigorous testing to ensure they respond safely to unanticipated situations and do not evolve in ways inconsistent with original expectations. Ultimately, people should play a critical role in making decisions about how and when AI systems are deployed, and batteries are a crucial part of the product infrastructure.
For every single employee of Energsoft, the standard lays out four requirements, four things that we want everyone to do. The first is to learn the principles; by taking this training, you are already fulfilling that first requirement. Second, it is critical to understand the sensitive use cases, and to seek guidance and report every single situation involving a sensitive use of AI technology. Third, in every application of AI, whether a sensitive use case or not, we must follow the requirements of the standard; there are requirements laid out for each of the six principles, as well as for the specific use cases called out in the standard. And finally, the fourth key point: everyone is empowered to ask for help at any time. We have defined three sensitive use categories that we want you to think about as you engage in AI solutions. The second category is where the system could create a risk of harm, and it is in this category that we want people to think about physical injuries as well as the emotional and psychological damage that systems related to the energy grid could do.
We all have a responsibility, with this fantastic technology that we create, to use the knowledge we have about the technology and the professional skills we bring to bear, to look ahead into the future and try to foresee the impacts of that technology. This work is challenging, and it is significant for all of our prospects, and I hope everyone sees that there is a real opportunity to join this exciting work. One of the pieces of guidance that we established early on is to just slow down and ask the "can versus should" question: how could it positively affect the business and the organization, but also, how could it negatively affect that organization's customers or constituents?
Will a car or storage battery warranty cover a replacement twice? Say the battery stopped working after six months, and two months after the replacement it stopped working again: is only one replacement covered, or not? Typically, if you bought a battery with a 36-month warranty and experienced five failures in that period, you might get several replacements, but those replacements are warrantied only for the remainder of the original term. If a replacement is obtained at month 35, you are covered for just one more month. And if our prediction told us this failure was going to happen, should we notify the warranty company or the user first?
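As a minimal sketch of that warranty arithmetic (the function and the month-based accounting are our own illustration, not the terms of any specific contract):

```python
def remaining_warranty_months(original_term_months: int,
                              months_since_purchase: int) -> int:
    """Months of coverage left on a replacement unit.

    Under the policy described above, a replacement inherits the
    remainder of the original warranty rather than starting a new one.
    """
    return max(original_term_months - months_since_purchase, 0)

# A replacement obtained at month 35 of a 36-month warranty
# carries only one more month of coverage.
assert remaining_warranty_months(36, 35) == 1
# A first replacement at month 6 is still covered for 30 months.
assert remaining_warranty_months(36, 6) == 30
```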
We are making sure that we are intentionally inclusive and intentionally diverse in the approaches we take toward AI. We want to make sure that the full spectrum of communities is covered, and if we genuinely think about how to design for the 3%, we can solve for the 97% at the same time. We need to make sure those communities are involved from the earliest concept, feature design, and planning, and that they are part of our testing. We are also building side by side with individuals to make sure we are not taking an ableist perspective on the population we intend to serve.
Just think about it. We are the first generation in the history of humanity to entrust computers with decisions that previously have always been made only by people. It is therefore of fundamental importance that we get this right, that we imbue computers with the capacity to reason ethically and to aspire to the best of what humanity has to offer. Customers want to know that we are serious about implementing these principles across the company and throughout our technology, and they want us to share with them what we are learning as we move forward. It matters to the world as well. People are looking for more than companies that invent high technology; they are looking for people who are focused on ensuring that this technology is designed, distributed, and used responsibly. That is what we can achieve together, working across Energsoft. Thanks for taking some time today to participate in this training; Energsoft believes it concerns one of the fundamental issues of our time.
Fairness relates not just to the system, the technical component, but to the societal context in which the system is deployed. That means fairness in the context of AI systems is a fundamentally sociotechnical challenge. Climate change is not a joke, and it means we have to have a greater diversity of people developing and deploying AI systems, to get fresh ideas and new perspectives from around the globe.
And what we see is that the assumptions and decisions made by teams at every stage of the AI development and deployment lifecycle can introduce biases. For example, is a Panasonic battery better than an LG battery, or than one from a small brand in Zurich? That is why this is such an important topic.
Energsoft platform trustworthiness is not something we can just delegate to one or two people, call it quits, and move on. No! AI accountability is something that everybody must be thinking about actively, with all stakeholders, at all times, to improve our process.
Transparency and intelligibility can help us achieve a diverse range of goals, from mitigating unfairness in machine learning systems, to helping developers debug their AI systems, to earning more trust from our users. There are two sides to transparency.
In part, transparency means that the people who create AI systems should be open about how and why they are using AI, and open about the limitations of those systems. Transparency also means that people should be able to understand the behavior of AI systems; this is what you often hear referred to as interpretability or intelligibility. The choice of a training data set determines the behavior of a machine learning model.
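To make that last point concrete, here is a minimal sketch using synthetic data and scikit-learn: two identical models, trained on batteries cycled at different temperatures, give different answers to the same query. Every number here is illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
cycles = rng.uniform(0, 1000, size=(200, 1))

# Hypothetical degradation rates: hot cells fade faster than cool ones.
capacity_cool = 100 - 0.010 * cycles[:, 0] + rng.normal(0, 1, 200)
capacity_hot = 100 - 0.025 * cycles[:, 0] + rng.normal(0, 1, 200)

model_cool = LinearRegression().fit(cycles, capacity_cool)
model_hot = LinearRegression().fit(cycles, capacity_hot)

# Same model class, same query, different behavior: the only
# difference is the data each model was trained on.
query = np.array([[800.0]])
print(model_cool.predict(query))  # ~92% capacity remaining
print(model_hot.predict(query))   # ~80% capacity remaining
```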
So how can we bring more transparency to our data? Datasheets for datasets help data creators understand and uncover potential biases in their data that they may have missed or unintentional assumptions that they were making. And they help dataset consumers determine if a dataset is right for their needs. We have put together an initial set of questions that we think cover the critical information that a datasheet should include.
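As a minimal sketch of what such a datasheet might look like in machine-readable form (the section names follow the public "Datasheets for Datasets" proposal; the concrete fields and values are illustrative assumptions, not Energsoft's actual question set):

```python
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    motivation: str            # Why was the dataset created?
    composition: str           # What do the instances represent?
    collection_process: str    # How was the data acquired?
    preprocessing: str         # Cleaning, filtering, labeling steps
    recommended_uses: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)

sheet = Datasheet(
    motivation="Lifetime prediction for grid-scale storage cells.",
    composition="Per-cycle capacity and impedance for 1,200 NMC cells.",
    collection_process="Lab cyclers, 25 C ambient, constant-current.",
    preprocessing="Outlier cycles removed; capacity normalized to cycle 1.",
    recommended_uses=["capacity-fade modeling"],
    known_limitations=["no low-temperature data", "single chemistry"],
)
```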
So right now, we do live in a society that is unfair and biased in many ways. And Energsoft thinks the whole point of focusing on fairness in AI systems is to make sure that the systems we develop reduce unfairness in our society, rather than keeping things at the same level or even making them worse.
Reliability and safety are a concern for every AI system we develop. We need to make sure that the systems we are developing are consistent with our design ideas and working in a way that is compatible with our values and principles. This requires that our systems and models do not create harm in the world, and in situations where they may make mistakes, we only push products out with quantified and well-understood risks and harms, which we share with our users.
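One hedged illustration of what "quantified and well-understood risk" can mean in practice is to ship an uncertainty interval alongside every prediction. The ensemble below is a stub and the numbers are synthetic; this is a sketch of the reporting pattern, not Energsoft's production method:

```python
import numpy as np

def predict_with_uncertainty(models, features):
    """Return the ensemble mean and a simple 95% interval."""
    preds = np.array([m.predict(features) for m in models])
    mean = preds.mean(axis=0)
    lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)
    return mean, lo, hi

class _StubModel:
    """Stand-in for one trained lifetime model in the ensemble."""
    def __init__(self, offset):
        self.offset = offset
    def predict(self, x):
        return np.full(len(x), 1400.0 + self.offset)

models = [_StubModel(o) for o in (-80, -20, 0, 30, 90)]
mean, lo, hi = predict_with_uncertainty(models, np.zeros((1, 3)))
# Report a range to the user, not a bare point estimate.
print(f"estimated {mean[0]:.0f} cycles "
      f"(95% interval {lo[0]:.0f}-{hi[0]:.0f})")
```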
Our customers' trust depends on the guiding principle we have concerning reliability and safety, and it is a concept that applies to every AI product in the company. When we think about safety, the first examples that come to mind are self-driving cars. But the concern is not limited to those physical systems and physical agents; we also worry about harm to human lives when a machine learning model is making predictions about people's health in hospitals, for example about a diagnosis.
Systems that get those predictions wrong can harm people, so those are the cases we worry about, because the threat is to human lives.
But this does not mean that the issue concerns only physical systems. Reliability is a big concern, and small mistakes can pile up when a model gets used many times across a large group of people; that is why it is a concern for everything we build.
Despite all the complexity, with these new models and new technology that can be somewhat unpredictable and somewhat hard to interpret, we are still accountable for how our technology impacts the world. Second, Energsoft thinks about accountability as the structure we put in place to make sure we are consistently enacting our principles and taking them into account in everything we do.
Part of our accountability is also to help our customers and partners be accountable. We have a set of principles that guide how we develop, how we sell, and how we advocate for regulation on facial recognition, because we feel all three of those pieces are critical to being accountable.
We think it has a lot of great uses, and we also believe there are a lot of applications that could interfere with people's civil liberties or push society in a direction we are not interested in supporting. So today we have a set of guidelines that teams can follow to think through these sorts of considerations at every step of the life cycle, and that is a first in this company, and, Energsoft believes, in many other places outside this company as well.
This matters for information and decisions in domains like battery post-mortem analytics, selection of a battery supplier, or credit financing for new battery storage that will support the grid. We also see over- and underrepresentation in the data. And there is not a single definition of fairness that we can easily quantify and just integrate into our systems.
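As a small illustration of the representation point (a hypothetical pandas snippet with made-up column names and values): counting how often each group appears in the data is easy, but deciding what distribution would be fair is not, which is why no single metric can simply be dropped into a system.

```python
import pandas as pd

cells = pd.DataFrame({
    "supplier": ["Panasonic", "LG", "LG", "SmallBrandZurich", "Panasonic"],
    "failed_early": [0, 1, 0, 1, 0],
})

# Share of training examples per supplier: a rare group may be
# badly underrepresented, so the model learns little about it.
print(cells["supplier"].value_counts(normalize=True))

# Per-group failure rates can look very different when a group is
# rare, even if the underlying batteries are comparable.
print(cells.groupby("supplier")["failed_early"].mean())
```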
Privacy is a fundamental right, and Energsoft has a long-standing commitment to privacy and security in the systems and products we build for our customers around the globe, whether government national battery labs, corporations, or startups. With AI and machine learning, we add new complexity to those systems, and an increased reliance on data to develop them and to train the networks.
That increasing reliance on data adds new requirements for keeping the network secure. Because we are using more data, we need to ensure the protection of that data, so that it is not leaked or disclosed. One of the ways we approach that is to never remove the data from a customer's device or laboratory: we run the models locally on the device, eliminating that potential vulnerability.
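A minimal sketch of that pattern using ONNX Runtime, where the model ships to the data instead of the data shipping to a server; the file name lifetime.onnx and the input tensor name features are assumptions for illustration:

```python
import numpy as np
import onnxruntime as ort

# Load the exported model on the customer's own machine.
session = ort.InferenceSession("lifetime.onnx",
                               providers=["CPUExecutionProvider"])

def predict_locally(telemetry: np.ndarray) -> np.ndarray:
    """Run the lifetime model on data that never leaves this device."""
    return session.run(None, {"features": telemetry.astype(np.float32)})[0]

# Telemetry read from the local cycler; nothing is uploaded.
reading = np.random.rand(1, 8).astype(np.float32)
print(predict_locally(reading))
```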