New Government AI Guidelines May Be Too Broad for Practical Use
In the last three years, artificial intelligence has become a significant presence in our working lives. As these technologies have grown, so too have the calls for regulation and guidance on their use. In response, the Government of Ireland has issued advisory guidelines for public sector organisations deploying AI. These new guidelines provide an extensive framework, which is surely welcome. The Guidelines, however, run the risk of being too broad. They contain so many provisions, touching on areas from climate change to diversity and inclusion to design requirements to transparency rules, that it is hard to imagine any organisation applying them in full.

It is worth framing at the outset that these Guidelines have been written to be optional for public sector organisations. They can also be considered provisional, as the speed with which these technologies are changing would quickly render fixed rules obsolete. What the Guidelines establish instead is a series of seven principles intended to guide the use of AI. In deploying AI, public sector organisations are now asked to consider the following:

  1. Human agency and oversight
  2. Technical robustness and safety
  3. Privacy and data governance
  4. Transparency
  5. Diversity, non-discrimination, and fairness
  6. Societal and environmental well-being
  7. Accountability

Each of these principles outlines an ideal to be aspired to when using AI. For instance, “Principle 4: Transparency” commits public sector workers to being transparent with end-users about the AI systems in use. There must be clear documentation of AI model development, and the public must be informed whenever they are interacting with an AI system. In many cases, these principles tie into existing laws or initiatives. “Principle 3: Privacy and Data Governance”, for example, relates to GDPR compliance, while “Principle 6: Societal and Environmental Well-Being” is informed by the Climate Action Plan.

Translating these principles into action involves following a “Project Lifecycle” for AI projects. This lifecycle breaks down any AI project into five key stages: “Design, Data & Models”, “Verification”, “Deployment”, “Operation”, and “Retirement”. Some of these stages are further divided into subsections. For instance, the Design stage includes Planning & Design, Data Collection & Processing, and Model Building, with each of these subsections outlining specific actions to be followed to complete the stage. It is here that we encounter the scale of the Guidelines. Each stage comes with its own set of actions, enough in each case to satisfy all seven principles. That means a minimum of seven actions per stage, and the number of potential actions balloons quickly.

As an example, in the “Planning & Design” phase an organisation might choose to “set up role-specific responsibilities for oversight”, “integrate data minimisation and security protocols into the AI design”, or “conduct social impact assessments to ensure AI systems contribute positively to Irish society”, among other actions. On their own, the actions have value. Taken together, they can represent an enormous burden. “Planning & Design” contains twenty-six potential actions, and this is only one of three subsections of “Design, Data & Models”. Furthermore, the Guidelines suggest two additional planning stages outside the Project Lifecycle: the Decision Framework and the AI Canvas. The Decision Framework consists of a list of questions to be considered at the outset of a project, while the AI Canvas provides a worksheet of fifteen questions for the Project Lead to answer during the planning stage of an AI project. These sections are also optional, although the AI Canvas in particular is more compressed and user-friendly than the ideas set out in the Lifecycle.

It is worth bearing in mind that, at this moment, debates are ongoing within the European Parliament as to whether GDPR should be rolled back. Tech regulation in Europe has suddenly become very uncertain. At the same time, the government’s pre-existing commitments, such as targets in ESG areas, cannot be easily dropped. These Guidelines speak to this tension: they put everything on the table as a possible means of regulation, but commit to no single method. Working out what to do with the Guidelines will be a challenge in itself.

Contributors
Dr. Conor Dowling | Research & Policy Executive | Risk Consulting