The GATE Experimental Regulation and Policy Lab (ERPL) offers a sandbox environment and tools in an innovative package for the interdisciplinary development and testing of technologies and for the prototyping and modelling of new legal, ethical and policy solutions that address contemporary technical, regulatory and market concerns.

On the one hand, the ERPL offers a sandbox environment for testing the feasibility, performance and innovation potential of new products, services and business models. On the other, it provides technology, legal, ethics and policy advice and training services to research organisations (eg, in R&D projects) and companies.

The ERPL’s added value lies in three directions:

  1. Contributing to evidence-based regulatory learning that increases oversight capacity and the understanding of the opportunities, emerging risks and impacts of big data and AI applications
  2. Offering services that are in high demand but short supply because they require interdisciplinary expertise that no single service provider (eg, a lawyer, an ethics officer or a solutions architect) can supply alone
  3. Bootstrapping the national AI regulatory sandbox required under the European Union’s AI Act by providing a sandbox environment aligned with the policy priorities of national competent authorities and developed in collaboration with the supervisory authorities expected to be involved in setting it up

The ERPL seeks to become a trusted international partner with unique expertise and a regional hub for policy and regulatory testing, attracting stakeholders from Central and Eastern Europe and the Western Balkans region. Furthermore, in the long term, the ERPL strives to become the basis of a cross-border regulatory sandbox for the region, eg, in the domain of health data exchange.

Sandbox

What is the ERPL Sandbox?

The ERPL Sandbox offers a testing environment for technology solutions, with a focus on big data and AI, developed by companies and research organisations that either wish to work in collaboration with regulators (for example, through a cooperation agreement) or seek guidance on the legal, regulatory and ethical implications of their product, service or business model.

The sandbox offers testing capacity with a feedback loop to the competent authority in cases where the latter is involved in the process. It also provides continuous learning and knowledge sharing for national competent authorities.

In these cases, the ERPL can set up ad hoc institutional cooperation agreements with supervisory authorities, eg, the Commission for Protection of Personal Data, whereby the authority would have access to results and insights from the testing activities conducted in the ERPL.

Who is it for?

  • Companies utilising AI and big data technologies in practical applications, services and business models
  • Companies that want to test innovative solutions with real users in a safe environment using state-of-the-art testing infrastructure
  • Companies that want to get access to vetted data sets and practical guidance for improvement and implementation of their solutions
  • Companies that are willing to provide evidence-based feedback regarding existing regulations and their improvement
  • Companies that want to meet customer expectations and balance them with the interests of society

Eligibility

The area of testing is technology-neutral, focusing on applications rather than specific technologies, including applications using AI, big data, digital twins, and simulations. The sandbox will cover diverse sectors such as digital health, future cities, intelligent government, and smart industry.

Targeted calls will be made for each area of testing, with no restrictions on territorial scope (the where) and a focus only on material scope (the what). The specific GATE testing infrastructure available, including for AI, big data, digital twins and simulations, will be discussed with applicants.

Both incorporated and unincorporated entities are welcome to participate.

Criteria

Each criterion below is assessed through key questions, with positive and negative indicators guiding the evaluation.

Scope of the call

Key questions:
  • Are you doing something that is within our areas of testing?
  • Will your customers be in the EU?

Positive indicators:
  • Your innovation is intended for the EU market.
  • The relevant activity is regulated by AI and data-related legislation.

Negative indicators:
  • Your innovation doesn’t appear to be intended for use in the EU.

Novelty of application

Please note that we focus on applications, not individual technologies.

Key questions:
  • Are you working on something innovative or notably unique? It might be entirely new, a fresh market, or an updated version of an existing concept.

Positive indicators:
  • Desk research yields limited or no comparable instances of innovation within the market.
  • Your innovation represents a significant and noticeable increase in scale.

Negative indicators:
  • Many similar innovations exist, and yours seems to be just a minor change to an established model.

Customer and social benefits

Key questions:
  • In what ways does your proposal benefit consumers, whether individuals or businesses?
  • How does it enhance current services?
  • What impact do you expect it to have on society?
  • What measures will you take to safeguard consumers from risks associated with your model?

Positive indicators:
  • Your innovation is likely to result in a better deal for consumers, such as reduced prices, improved quality, enhanced security, and more.
  • You have evaluated potential consumer/customer and social risks and ways to mitigate them.

Negative indicators:
  • Possible negative effects on consumers or markets.
  • Enables bypassing regulatory requirements.

Readiness

The technology readiness level (TRL) can be lower in digital sandbox environments.

Key questions:
  • Have you considered how your model aligns with applicable technology, AI and data regulations? Have you conducted any preliminary research into the regulations that could affect your business operations?
  • Could you describe how your business will operate?
  • Are you prepared to evaluate the innovation with actual consumers/customers in a real-world market?
  • What is the technology readiness level of your innovation and what types of resources do you need to advance it?

Positive indicators:
  • You’ve clearly done your homework, understood your responsibilities, and have a well-defined business plan.
  • You have established a comprehensive testing strategy that includes explicit goals, defined parameters, and success metrics.
  • You have already conducted testing.
  • You have the tools to experiment in the sandbox.
  • You have adequate measures to protect consumers and can offer suitable redress when necessary.
  • Testing partners are either already identified or will be shortly.
  • The technology readiness level of your innovation is anywhere between 4 and 9.
  • Should a test require authorisation, you are prepared to apply shortly.

Negative indicators:
  • You have shown minimal or no effort to comprehend the applicable regulations.
  • The testing objectives are unclear, and the testing plans are not fully developed.
  • You lack the resources needed for the test.
  • The technology readiness level of your innovation is below 4.
  • Your proposed customer protections are insufficient, and suitable redress cannot be provided.

Company size

Key questions:
  • Is there an actual necessity for you to conduct tests in our sandbox given your company size?

Positive indicators:
  • You lack other ways to test your innovation.
  • The innovation does not align well with the current regulatory framework.
  • You would gain from using a sandbox tool to conduct tests in a real-world setting.
  • Going through the complete authorisation process would be too expensive or challenging for a short and feasible test.

Negative indicators:
  • You already have an assigned advisor.
  • You possess substantial regulatory compliance resources.
  • Your business model aligns well with the current regulatory framework, posing no especially challenging issues.
  • Live testing is not required to answer your question or attain your objective.

Evaluation guidelines (public)

Evaluation Process for Applications to the Experimental Regulation and Policy Lab

  1. Initial screening

Applications are initially reviewed to ensure they meet the basic eligibility criteria. This review includes verifying the applicant’s adherence to the thematic scope of the lab and the current call for applications and ensuring that the proposed project aligns with the lab’s objectives.

  2. Company size

The size of the applying company will be taken into consideration during the evaluation process. Projects submitted by smaller companies or startups might be assessed more leniently, especially within digital sandbox environments. This approach aims to encourage innovation and experimentation, providing these smaller entities with a fair opportunity to develop and test their ideas.

Larger companies will not be prohibited from applying per se; however, they may be held to higher standards due to their greater access to resources and capabilities. The goal is to create a balanced field where innovation can thrive regardless of company size, while also ensuring that projects are feasible and aligned with the lab’s objectives.

  3. Technological Readiness Level (TRL)

Projects are assessed based on their technological maturity levels. While projects at lower Technology Readiness Levels (TRLs) are welcomed in digital sandbox environments to foster their development, they may be required to define a clear strategy and timeline and be able to demonstrate how they are planning to advance their solution to a higher TRL.

This approach ensures a balanced evaluation process where both nascent and mature technologies have the opportunity to be assessed fairly. Projects in the early stages are encouraged to explore and innovate, while projects with more developed technologies need to provide evidence of their readiness for implementation and potential for scalability.

The goal is to support a wide range of technological advancements, focusing particularly on groundbreaking new ideas that contribute to the lab’s objectives.
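As a purely illustrative sketch, the TRL guidance above can be encoded as a simple eligibility check. The function name, its arguments and the "conditional" outcome are hypothetical conveniences for illustration; the actual evaluation is performed by the expert panel, not by any such tooling:

```python
# Hypothetical sketch of the TRL rule described above: TRL 4-9 is a
# positive indicator, TRL below 4 is a negative one, except that lower
# TRLs are admissible in digital sandbox environments when a clear
# advancement strategy is in place.

def trl_indicator(trl: int, digital_sandbox: bool = False,
                  has_advancement_plan: bool = False) -> str:
    """Return 'positive', 'conditional' or 'negative' for a given TRL."""
    if not 1 <= trl <= 9:
        raise ValueError("TRL is defined on the 1-9 scale")
    if trl >= 4:
        return "positive"
    # TRL below 4: admissible only in a digital sandbox with a plan
    if digital_sandbox and has_advancement_plan:
        return "conditional"
    return "negative"

print(trl_indicator(6))                          # positive
print(trl_indicator(3))                          # negative
print(trl_indicator(2, digital_sandbox=True,
                    has_advancement_plan=True))  # conditional
```

The "conditional" branch reflects the requirement that early-stage projects demonstrate a strategy and timeline for advancing to a higher TRL.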

  4. Readiness

Readiness is a crucial criterion that evaluates the project’s preparedness for successful implementation. This evaluation involves a thorough examination of the project’s feasibility, which encompasses the practicality of the proposed methods and the likelihood of achieving the project’s objectives.

It also considers the resources required, such as funding, technology, and human capital, and assesses whether these resources are sufficiently available to support the project. Additionally, potential barriers to successful execution are identified and analysed to understand the risks and challenges that might hinder progress. By meticulously assessing these aspects, we ensure that only projects with a high potential for successful implementation advance to the next stage.

  5. Customer and social benefits

The evaluation process places a significant emphasis on the anticipated benefits to customers and society. Rather than focusing solely on technological advancements, projects are assessed based on their potential to create substantial positive social impact or enhance customer experiences.

Initiatives that demonstrate a clear promise of delivering meaningful improvements to societal well-being or significantly elevating the quality of customer interactions are given higher priority.

  6. Novelty of application

Novelty of application is a key factor in assessing projects. Those that demonstrate originality and introduce new concepts or significantly enhance existing frameworks are given preference. The evaluation process seeks to identify projects that push the boundaries of current knowledge and practice, known as the state of the art, bringing fresh perspectives and innovative solutions to the forefront.

By encouraging unique and ground-breaking ideas, the lab aims to foster an environment where innovation thrives, and novel applications can lead to substantial advancements in their respective fields.

  7. Alignment with the scope of the call

Projects must align with the specific scope of the call for applications. This alignment ensures that all proposed projects are relevant to the lab’s current focus and strategic goals. The maximum number of applicants per call is typically set between five and ten, and this parameter needs to be tested in real conditions. Having a maximum number of applicants allows us to manage our resources effectively and ensure dedicated attention to each project.

If the number of applications exceeds this maximum, we can use this criterion to sift through the applications and select the most relevant and promising ones. By prioritising areas based on our institute’s research and application domains, we ensure that our efforts are concentrated on projects that have the highest potential for impact and alignment with our objectives.

This structured approach to application management helps maintain a high standard of quality and relevance, fostering an environment where the most innovative and impactful projects can thrive.

  8. Positive indicators

Projects that exhibit potential for scalability are highly valued. Furthermore, applications that present clear and robust business models, demonstrate strong partnerships, or have endorsements from relevant stakeholders will be considered more favourably. These indicators serve as positive signs that the project is not only innovative but also viable and capable of making a significant impact in the real world.

  9. Negative indicators

Projects that lack clear objectives are often rated less favourably, as it is essential for a project to have a well-defined purpose and direction to be considered viable. Additionally, if a project raises unresolved ethical concerns, it may also be viewed negatively.

Ethical considerations are paramount in ensuring that any innovations brought forward do not cause harm or controversy. Furthermore, projects that demonstrate minimal potential for real-world application are likely to be rated less favourably.

It is crucial for projects to show that they can be implemented effectively and have a tangible impact in their respective fields. These negative indicators serve as critical points of assessment in the evaluation process, helping to ensure that only the most promising and responsible projects are selected for further development.

  10. Final decision

The final decision will be made by a panel of experts, comprising a committee of GATE research staff and, on an ad hoc basis, industry representatives from various sectors relevant to the project’s focus. This panel will conduct a thorough review, meticulously analysing all the criteria to ensure a fair and comprehensive evaluation. Each decision will be grounded in a detailed assessment, and feedback will be provided to all applicants to help them understand the strengths and weaknesses of their submissions.

This thorough evaluation process ensures that only the most promising and impactful projects are selected for incubation within the Experimental Regulation and Policy Lab, fostering an environment conducive to regulatory innovation and progress.

Assessment of testing needs

Informal interviews with companies

To ensure a comprehensive understanding of each project’s potential, informal interviews will be conducted with the companies involved. These interviews will be held under Non-Disclosure Agreements (NDAs) to protect sensitive information and foster open communication.

What we can offer based on initial evaluation

Following the initial evaluation of the use case, we will identify and outline the specific services and support we can offer to each project. This may include technical assistance, consultation on business models, and connections to potential partners or investors.

Terms and conditions (T&Cs)

The terms and conditions of our collaboration with the companies will be clearly defined and agreed upon. This will include compliance with applicable AI, data and technology regulations and other relevant legal, ethical and technical standards.

Development of a testing plan

Together with each company, we will create a comprehensive testing plan. This plan will detail the goals, methodologies, and schedules for the testing phase, ensuring alignment and readiness across all parties involved.

Testing protocol

We will respond to questions that are identified during the assessment of testing needs. Depending on the agreed parameters, we will perform technical services that may include data set enrichment, data visualisation, building of data models and digital twins or simulations.

Exit report and feedback

Following the testing activities, we will compile an exit report and gather feedback to determine whether an aggregated summary should be made publicly available. This summary might cover particular problems and best practices that were identified during the testing phase.

We’re also considering a future project to set up a Digital Sandbox. This service will not focus on regulatory aspects, but rather on the provision of services such as certified data sets.

Services

In addition to the sandbox, the ERPL offers a comprehensive suite of services tailored to meet the diverse needs of various stakeholders, including partners in research projects, companies, policymakers, and legislators.

These services encompass law and ethics research, providing insights to ensure compliance with emerging regulations and ethical standards.

For companies, ERPL facilitates infrastructure matchmaking, helping them find the right technological partners and resources.

Policy makers and legislators benefit from ERPL’s policy advice and policy prototyping services, which assist in the development and testing of new regulations in a controlled environment.

Law and ethics research in projects

We are a trusted partner with interdisciplinary expertise embedding insights from science, technology, law and ethics into the design and implementation of solutions developed in research and innovation actions funded by the European Commission and other international and national funding organisations.

Our offering extends to providing comprehensive legal and ethics steering, focusing particularly on data protection law, data governance, data spaces and data contracts, intellectual property law and management, and ethics steering.

We support partners in projects with evidence-based insights into improving compliance with the applicable national and international regulations in the field of AI and big data. We offer guidance on data protection laws to help manage and safeguard personal data in accordance with GDPR and other relevant legislation. Our data governance services aid in the structured and secure handling of data, ensuring that data spaces and contractual agreements are managed efficiently and responsibly.

Furthermore, we provide expertise in intellectual property law and management, helping our partners protect and leverage their intellectual assets effectively. Our ethics steering ensures that research and innovation projects adhere to high ethical standards, promoting personal autonomy, integrity, responsibility, transparency and accountability throughout the project lifecycle.

Signposting

We can assist in identifying existing regulations and guidelines that might be applicable to your company and its proposed business model. While this guidance will not be customized specifically for your firm, many organisations find it beneficial to be directed to pertinent information given the extensive amount of regulatory advice available.

Additionally, we provide support in understanding how these rules might impact your operations. Our goal is to simplify the regulatory landscape, making it easier for you to navigate and implement necessary compliance measures.

Moreover, we can offer continuous updates on regulatory changes, ensuring that your company remains compliant with the latest standards. This proactive approach can help you avoid potential pitfalls and stay ahead in your industry.

Testbed

The ERPL testbed focuses on testing and experimenting with technologies during research and development. One example is our legal testbed, a digital experimentation environment that allows stakeholders in research and innovation projects to prototype policy and legal solutions to problems emerging from their pilots and use cases, using methods such as policy prototyping, experimental policy design, and design jams.

Testbeds provide controlled environments for systematic experiments, often without regulator involvement. They are popular across industries for testing new technologies and products and are favoured by researchers, developers, and engineers.

Infrastructure matchmaking

This service, addressed primarily to SMEs and startups, matches participants in the ERPL sandbox with appropriate testing infrastructure, such as high-performance computing facilities like DISCOVERER, the resources of the sectorial Testing and Experimentation Facilities in the domains of agrifood, healthcare, manufacturing and smart cities, as well as other public and/or private testing facilities and infrastructure in Bulgaria and across the EU.

Informal guidance

Informal guidance is a tool we use to help firms understand the potential regulatory implications of their innovative products or business models on an ad hoc basis. This service aims to provide preliminary insights into specific regulatory issues, helping businesses navigate the early stages of development with more clarity.

We offer informal guidance on distinct regulatory matters and the potential impacts of an innovative concept, notably related to regulations applicable to AI and big data applications. This guidance assists in identifying possible challenges and opportunities that might arise as the product or service progresses through its development cycle.

It is important to note that while informal guidance can be very beneficial, it is relied upon at the recipient’s own risk. It does not carry the same legal weight as formal legal advice and should be used as just one of many tools.

Knowledge hub

ERPL also acts as a knowledge hub, offering an array of resources designed to enhance stakeholders’ understanding and capabilities in navigating the complexities of AI and big data technologies. This knowledge hub includes a diverse range of training programs tailored to various skill levels, ensuring that both novices and seasoned professionals can find value.

Additionally, ERPL provides informal guidance and awareness raising through workshops, seminars, and webinars, which foster a collaborative environment for knowledge exchange. These sessions often feature industry experts and thought leaders who share their experiences and best practices, helping participants stay abreast of the latest developments and trends.

By incorporating policy advice and policy prototyping into its offerings, the knowledge hub also supports stakeholders in understanding the regulatory implications of their innovations. This approach not only helps in identifying potential challenges but also in uncovering new opportunities as products and services progress through their development cycles.

Training

We provide customized interdisciplinary training on a wide array of pertinent subjects, including artificial intelligence, intellectual property, data protection, data management, and AI governance.

Our bespoke training offers are designed to meet the unique needs of diverse stakeholders, ensuring that each participant gains the necessary knowledge and skills to effectively navigate these complex domains.

Our training sessions are meticulously crafted to offer both theoretical insights and practical applications, fostering a comprehensive understanding of the regulatory landscape and its implications for innovative technologies. We provide hands-on experience through interactive sessions that involve prototyping solutions to common problems or challenges in the implementation of legal and regulatory frameworks, or building compliance artifacts (eg, policies or checklists).

Policy advice

Our policy advice service is designed to provide policymakers with expert and evidence-based insights on the practical implications of legal and regulatory frameworks affecting innovative AI and big data technologies. Through a combination of expert consultations, analyses, and recommendations, we assist public organisations in aligning their strategies with current and emerging regulatory requirements.

Key features of our policy advice service include:

  • Public consultations: Engage with our team of experts in one-on-one or public consultations to discuss specific challenges and opportunities related to current and future public policies.
  • Regulatory impact assessments: Understand the potential implications of existing and forthcoming regulations on innovation through our policy prototyping and sandbox testing activities.
  • Workshops and training: Participate in workshops and training sessions led by industry experts and thought leaders and designed to deepen your understanding of the practical implications of digital policies and regulations.

Policy prototyping

Policy prototyping is a design approach used to test and improve the effectiveness of a proposed policy. It involves creating low-resource, quickly deployed versions of policies to evaluate their potential impacts and feasibility before full implementation. This method allows you to learn about the strengths and weaknesses of an idea, identify new directions, and make adjustments in a low-risk environment. Our approach is based on use cases collected from our experience working with participants in the sandbox and the testbed.

Team

  • Prof Dr Sylvia Ilieva
  • Ivo Emanuilov
  • Boyan Dafov
  • Lorra Georgieva
  • Katerina Yordanova