Ethical guidelines for artificial intelligence (AI), drawn up by Varma’s specialists, steer the use of AI at the company

Varma employees are keen to make use of the possibilities of AI technology, but its rapid development also entails ethical challenges that must be identified and assessed. Varma’s ethical guidelines for AI were drawn up by a working group made up of Varma specialists. They sum up the goals, procedures and limits required for the ethically sustainable use of AI.

Last autumn, Varma started to create guidelines for the ethically sustainable use of AI. The need for the guidelines arose from the rapid development of AI technology and from situations for which the earlier guidelines did not provide sufficient answers or tools. The new guidelines will be integrated into the daily work of Varma employees, and workshop participants will pass on their competence in ethical thinking to the rest of Varma.

“Ethical guidelines for AI are currently being processed by a number of companies in Finland. We all seek benefits and improved efficiency in our work through the use of AI, but at the same time we have a responsibility to use AI ethically. At Varma, this is highlighted especially in our function, which develops AI systems, procures them from third-party suppliers and decides whether to adopt them. We must have both the competence and efficient tools for assessing the risks and the related ethical aspects,” says Varma’s Mika Nikkola, Director, Digital and Data Service.

Towards ethically sustainable daily AI use

Varma’s ethical guidelines for AI were created in workshops in which Varma specialists from different parts of the organisation received basic information on ethics and identified current and potential future applications of AI. They also discussed what kind of ethical aspects should be considered for these applications. The working group’s discussions were led by ethicist Anna Seppänen, Doctor of Theology, who already had previous experience in creating ethical guidelines for companies. She was impressed by the dedication and active approach of the Varma employees in the workshops throughout the project.

“I saw the Varma employees’ exceptional commitment and focus in the ethical discussions. The responsible use of AI requires co-operation across professions, so I was happy to see participants from different functions, such as IT, sustainability, HR, communications and legal affairs. I haven’t seen this kind of commitment before in companies, although the importance of co-operation is recognised,” Seppänen says.

AI technology has rapidly made its way into working life, and the fast development has increased the need to develop ethical thinking.

“For many, the arrival of AI may be a cause of anxiety that comes with threats, such as the loss of jobs or changes in work. The ethics of AI also brings together two phenomena that are difficult in working life – artificial intelligence and ethics. That is why I see the provision of AI competence to personnel as an important act that promotes well-being at work, since it prevents this ethical stress. Ethical guidelines for AI may serve as a means to improve competence instead of just as instructions,” Seppänen adds.

It is a good idea to have common guidelines for AI-related ethics, but they become meaningful only when people are able to apply them in their own everyday context.

“The tension between guidelines and everyday application is part of applied ethics and AI ethics. However, there is no need to exaggerate the issue. If AI is used in a process or service, it does not have to be any more complicated than taking some time to assess the process using the guidelines to see what kind of ethical questions are involved. How can the questions be answered? Can you find solutions on your own or do you need external help? Don’t freeze in the face of this vast phenomenon. Spending just a little bit of time thinking about these issues is much better than nothing,” Seppänen underlines.

Anna Seppänen leads a workshop of Varma’s working group on ethical guidelines for AI.

Varma deliberately invested in the size of the working group and the number of workshops to ensure that, after the process, all the participants would introduce AI-related ethical thinking to their teams.

“I believe that many workshop participants appreciated the importance of this journey and will pass on what they learned. Together, we are now able to identify the ethical opportunities and risks related to AI, weigh them from different perspectives and adopt decisions in everyday activities to benefit Varma. It is essential to know under which conditions the use of AI can be considered ethically sustainable at Varma. When used appropriately, it helps us improve the efficiency and quality of our operations,” says Nikkola, who also participated in the workshops.

Varma’s ethical guidelines for AI were completed at the beginning of this year as a joint effort. Now, all Varma employees will join in, and the guidelines will be a key part of new employees’ induction material. By publishing the guidelines externally and reporting on the theme, Varma wants to communicate its prudent and responsible approach to technology development.

“The workshops made it clear to me how important it is that Varma, as a major player in Finnish society, openly shares its practices related to AI. It builds confidence to know that these things have been given consideration and that the ethical guidelines are a part of Varma’s processes,” says Seppänen.

Varma’s guidelines for the ethical use of artificial intelligence

Varma’s ethical guidelines for artificial intelligence (AI) outline what the ethically sustainable use of AI requires and what Varma employees are committed to. Varma’s Executive Group approved the guidelines on 16 February 2024. A more extensive version has been drawn up for Varma’s internal use, describing the guidelines’ application in everyday work in more detail.

Use of AI in the responsible execution of Varma’s core task

The ethically sustainable use of AI aims to

  • improve the efficiency and quality of Varma’s operations. Varma’s core task is to secure earnings-related pensions. Thus, quality and efficiency are also ethical obligations to us. We make use of AI systems when they help improve the efficiency and quality of our operations.
  • benefit our customers. We take the interests of our policyholders, insured and benefit recipients into account in all our operations. Our goal is for our AI systems to benefit our customers in the form of, for example, faster and more accurate customer service.
  • support Varma employees’ sustainable working life. At Varma, sustainability includes sustainable work culture. The use of AI systems aims to promote the good working life of Varma employees through, for example, smoother specialist role routines.

Limits on the use of AI to prevent ethical risks

Absolute requirements for the ethically sustainable use of AI at Varma are

  • the exclusion of prohibited and ethically unsustainable applications of AI. The EU’s AI regulation bans certain AI applications altogether. Compliance with regulations is something we never compromise on. Furthermore, we may ban certain AI applications on the basis of our own ethical consideration.
  • identifying the ethical risks of AI systems and abandoning the development and use of the systems, if necessary. Before the adoption of an AI system, we assess its ethical risks in relation to the targeted impacts. Even if the targeted benefits were substantial, we do not adopt AI systems that come with excessive risks. The use of an AI system may be terminated at any time on the basis of ethical consideration. Varma employees and external persons are also able to safely and anonymously raise concerns related to the use of AI through the whistleblowing channel for reporting ethical violations.
  • protecting the key rights of our customers, pensioners, partners and other groups. When it comes to the use of AI systems, we examine the key rights pertaining to Varma’s core task. We do not use an AI system if we identify it as a threat to the realisation of these key rights and there are no sufficient means of preventing the threat.

Practices to ensure ethical sustainability in the use of AI

In the use of AI, we secure ethical sustainability

  • by having people be responsible and in control. AI systems are complex, adaptive and able to learn. In spite of this, the real responsibility and control must rest with people so that we can ensure that the use of AI promotes the goals set by people.
  • by ensuring sufficient transparency and comprehensibility. Transparency and comprehensibility cannot always be fully implemented, for example when the security of a system must be ensured. We aim for meaningful and sufficient transparency and comprehensibility to allow third parties to assess the ethical sustainability of our solutions, if necessary.
  • by actively assessing the impacts of AI systems from many perspectives. When we identify that an AI system or application entails a non-negligible risk of unwanted impacts, we incorporate exceptionally careful ethical assessment into all the stages of the AI system’s development and use.


Read more about the principles that guide our operations at How we do things.
