The AISBOM template. Copy it and fill in the values for the AI System at hand, using the description strings below as guidance:

```json
{
  "Provider": {
    "Name": "ACME",
    "Authorized_Representative": "If applicable: name with address",
    "Description": "ACME activity",
    "URL": "ACME website",
    "Address": "ACME location",
    "Country": "ACME country",
    "Main_Contact": "Main contact name and email, executive level, for AI topics",
    "DPO": "Data Protection Officer contact, or similar for non-EU countries",
    "Applicable_Authorities": {
      "Local": "Any national or state-level authority related to AI regulation (e.g. AESIA and AEPD in Spain)",
      "Others": "Any continental or international applicable authority"
    }
  },
  "AI_Product_Solution": {
    "Name": "Name of the solution or product that leverages the AI system and where the AI system is embedded",
    "Description": "Detailed description",
    "Intended_purpose": "Detailed description of the use for which the AI system is intended by the provider; see the definition of intended purpose in Article 3 (12)",
    "Focused_Market": "B2C, B2B, or internal use; applicable to both internal and external apps",
    "Date_Market": "Date the product with the embedded AI System is placed on the market or put into service; relevant for the 10-year documentation retention period",
    "Market_MemberStates": "Name of each Union Member State where the AI System is being placed on the market or put into service; used to assess whether there could be a widespread infringement or a widespread infringement with a Union dimension",
    "AI_Role": "Specific details of how AI improves the system",
    "Transparency_Tools": "Specific details of where the AI transparency is made available, such as websites, QR codes for physical devices, or a technical changelog",
    "GenerativeAI_Flag": "If applicable: YES or NO - whether the AI System includes any Generative AI component; if it does, pay attention to Generative_AI_Details"
  },
  "AI_Standalone_Solution": {
    "Name": "Name of the AI standalone solution",
    "Description": "Detailed description",
    "Intended_purpose": "Detailed description of the use for which the AI system is intended by the provider",
    "Focused_Market": "B2C, B2B, or internal use",
    "Date_Market": "Date the AI System is placed on the market or put into service",
    "Market_MemberStates": "Name of each Union Member State where the AI System is being placed on the market or put into service; used to assess whether there could be a widespread infringement or a widespread infringement with a Union dimension",
    "AI_Role": "Specific details of how AI improves the system",
    "Transparency_Tools": "Specific details of where the AI transparency is made available, such as websites, QR codes for physical devices, or a technical changelog",
    "GenerativeAI_Flag": "If applicable: YES or NO - whether the AI System includes any Generative AI component"
  },
  "Industry": {
    "Providers_Industry": "Name of the provider's industry or area of work",
    "AI_Solution_Industry": "Name of the intended industry where the AI system is placed on the market",
    "High_Risk": "Flag whether the AI system is part of a high-risk industry (e.g. toys)",
    "Other_Regulations": {
      "Industry_Regulation": "Applicable industry / vertical regulation",
      "Data_Regulation": "Applicable data privacy regulation",
      "Others": "Other international regulations"
    },
    "AI_Role": "Specific details of how AI improves the system"
  },
  "AI_System_Details": {
    "Version": "Version number or code",
    "Capabilities": "List of the AI's capabilities",
    "Limitations": "List of the AI's limitations",
    "Intended_purpose": "Detailed description of the intended use of the AI system",
    "Foreseeable_misuse_Forbidden_use": "Detailed description of the forbidden use(s) of the AI system; reasonably foreseeable misuse (vs. intended purpose) is defined in Article 3 (13) and mentioned in Article 9 (4) (b) and Article 13 (3) (b) (iv)",
    "Substantial_Modification": "If applicable: any modification or series of modifications of the AI System after its placing on the market or putting into service; relevant for Article 12 (2) (b), Article 28 (1) (b), the ML lifecycle, etc.",
    "Techniques_Used": "List or description of algorithms or techniques used"
  },
  "AI_System_Data": {
    "Datasets": [
      {
        "Data_Type": {
          "Image_Data": "YES or NO",
          "Text_Data": "YES or NO",
          "Tabular_Data": "YES or NO",
          "Voice_Data": "YES or NO"
        },
        "Data_Volume": {
          "< 1 Million rows": "YES or NO",
          "< 10 Million rows": "YES or NO",
          "> 10 Million rows": "YES or NO"
        },
        "Data_Velocity": {
          "Static": "YES or NO",
          "Batch": "YES or NO",
          "Stream": "YES or NO"
        },
        "Datasets_Used": {
          "Training_Dataset": "Description or link to training data",
          "Validation_Dataset": "Description or link to validation data",
          "Testing_Dataset": "Description or link to testing data"
        },
        "Training_Dataset_Ownership": {
          "Owned/Firstparty_Data": "YES or NO",
          "Thirdparty_Data_Proprietary": "YES or NO",
          "Thirdparty_Data_Opensource": "YES or NO"
        },
        "Validation_Dataset_Ownership": {
          "Owned/Firstparty_Data": "YES or NO",
          "Thirdparty_Data_Proprietary": "YES or NO",
          "Thirdparty_Data_Opensource": "YES or NO"
        },
        "Testing_Dataset_Ownership": {
          "Owned/Firstparty_Data": "YES or NO",
          "Thirdparty_Data_Proprietary": "YES or NO",
          "Thirdparty_Data_Opensource": "YES or NO"
        }
      }
    ]
  },
  "Natural_Person_Interaction": {
    "Interaction_Type": "e.g. chatbot, image recognition, etc.",
    "Affected_Persons": "Any natural person or group of persons who are subject to or otherwise affected by the AI System; see for instance Article 3 (8a), Article 10 (3), Article 13 (3) (b) (iv)",
    "Desired_Input_Data": "Data provided to or directly acquired by the AI System on the basis of which the system produces an output during deployment; see the legal definition in Article 3 (32)",
    "Input_Data_DataPrivacy": {
      "Non-Personal_Data": "YES or NO",
      "Personal_Data": "YES or NO; some AI Systems may combine both personal and non-personal data from different sources",
      "Special_Category_Personal_Data": "YES or NO; e.g. gender, religion, etc.; see the GDPR for the different level of consent and the more stringent security and processing requirements, https://gdpr-info.eu/art-9-gdpr"
    },
    "User_Notification": "How users are notified that they are interacting with an AI System; exception if the interaction with an AI System is obvious",
    "Information_Provided_To_Users": "What information users are given about the AI System (stricter rules for emotion recognition or biometric categorisation systems)"
  },
  "Audit_Log": {
    "Log_Entries": [
      {
        "Date_Time": "Timestamp of the interaction",
        "Input": "Description or snapshot of input received",
        "Output": "Description or snapshot of output produced"
      }
    ]
  },
  "Generative_AI_Details": {
    "Name": "Name of the Generative AI System",
    "Version": "Version number or code",
    "Purpose": "Description of what the Generative AI is intended to do",
    "Capabilities": "List of the AI's capabilities",
    "Limitations": "List of the AI's limitations",
    "Intended_purpose": "Detailed description of the intended use of the AI system",
    "Foreseeable_misuse_Forbidden_use": "Detailed description of the forbidden use(s) of the AI system; reasonably foreseeable misuse is defined in Article 3 (13) and mentioned in Article 9 (4) (b) and Article 13 (3) (b) (iv)",
    "Substantial_Modification": "If applicable: any modification or series of modifications of the AI System after its placing on the market or putting into service",
    "Watermarks": "List of the watermarks for AI-generated content",
    "Countermeasures": "List of the countermeasures to prevent illegal content",
    "Datasets_Used": {
      "Owned": "Description or link to owned / first-party data",
      "Others_free": "Description or link to third-party open or freely licensed data",
      "Others_copyrighted": "Description or link to third-party copyrighted data"
    },
    "Techniques_Used": "List or description of algorithms or techniques used"
  },
  "Human_Oversight": {
    "Oversight_Type": "e.g. periodic review, real-time monitoring, etc.",
    "Intervention_Mechanism": "How a human can intervene or override the AI"
  },
  "Bias_Mitigation": {
    "Mitigation_Strategies_Used": "List or description of strategies used",
    "Bias_Testing": "Description or results of any bias testing done"
  },
  "Decommissioning_Protocol": {
    "Procedure": "Steps to decommission the AI system",
    "Notification_Mechanism": "How stakeholders are informed"
  },
  "Emergency_Procedures": {
    "Procedure_Type": "e.g. for handling deep fakes, misinformation, etc.",
    "Response_Mechanism": "Steps or mechanisms for rapid response"
  }
}
```
# AISBOM - AI Software Bill of Materials

JSON Spec for Transparency Obligations of the EU AI Act, including LLM / foundation models

Version 0.1 (December 11, 2023)
- This JSON file is intended as a means to address the transparency requirements in the upcoming EU AI Act (focus on Articles 13 & 52).
- The file is an illustrative example intended as a basis for discussion and feedback.
- To use the file, copy the template and insert the values of the AI System at hand, using the descriptions given in the template as guidance (a minimal filled-in sketch follows below).
- The file is not a formal JSON Schema, but we may adopt a formal schema in the future for improved automated processing.
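To make the usage step concrete, here is a minimal filled-in sketch for a fictitious customer-support chatbot. All values (company, product, contacts, dates) are hypothetical, only a few sections are shown, and `Market_MemberStates` is rendered as an array for readability, a choice the template leaves open:

```json
{
  "Provider": {
    "Name": "ExampleSoft GmbH",
    "Country": "Germany",
    "Main_Contact": "Jane Doe, Chief AI Officer, ai-compliance@examplesoft.example",
    "DPO": "dpo@examplesoft.example"
  },
  "AI_Product_Solution": {
    "Name": "HelpDesk Assistant",
    "Intended_purpose": "Answering customer-support questions about ExampleSoft products",
    "Focused_Market": "B2C",
    "Date_Market": "2024-03-01",
    "Market_MemberStates": ["Germany", "France", "Spain"],
    "GenerativeAI_Flag": "YES"
  },
  "Natural_Person_Interaction": {
    "Interaction_Type": "Chatbot",
    "User_Notification": "Banner shown at the start of every chat session: 'You are chatting with an AI assistant.'"
  }
}
```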
## Call to action
- Please share your feedback in Hugging Face Discussions.
- See the call for contributions at the end of this document.
## How to cite this work
@AdrianGonzalezSanchez (OdiseIA, HEC Montréal, IE University, Microsoft) & appliedAI Institute for Europe gGmbH (2024). AI Software Bill of Material - Transparency (AI-SBOM). Hugging Face
## Overview

The AI-SBOM is a JSON template for documenting AI Systems under the EU AI Act. It mainly addresses the transparency obligations outlined in Articles 13 and 52 of the AI Act, in order to share relevant information with various stakeholders and interested parties.
BOM = Bill of Materials: the set of elements, an inventory, needed to compile or produce a product. The concept is adapted here to AI Systems, inspired by areas like manufacturing and cybersecurity.
## Purpose of the AI-SBOM Transparency
Collecting and providing the information required by Articles 13 and 52 can be challenging in complex AI value chains involving multiple entities who control or need certain information. The AI-SBOM Transparency is intended as the single point of truth for collecting and sharing the necessary information, keeping the following benefits in mind:
- Overview of transparency obligations: reduces the need for an in-depth understanding of the AI Act (saving the time and effort of reading 160+ pages).
- Improved risk management for transparency: completing the AI-SBOM helps identify and address potential vulnerabilities and dependencies related to transparency throughout the development cycle of high-risk AI systems.
- Simplified compliance with transparency requirements: helps ensure adherence to the AI Act's transparency requirements by collecting the relevant information, which in turn reduces deployment and liability risks.
- AI-SBOM Transparency may complement and/or refer to the instructions for use (the “User Manual”). It could serve as a first “draft” of the “User Manual” that has to be provided to the deployer.
## Target group of the AI-SBOM
AI-SBOM Transparency targets technical professionals engaged in compliance matters as well as compliance experts delving into technical aspects. Our goal is to support providers and deployers in managing, maintaining, and making knowledgeable choices about AI systems within the AI Act's regulations (Articles 13 and 52). Achieving this is more feasible through a collaborative approach.
## What is the scope of Article 13 AI Act? [EU Parliament's Proposal]
Article 13 AI Act applies to high-risk AI Systems (details in Article 6) and outlines requirements and considerations related to transparency and accountability in the deployment of an AI System. In a nutshell:
Article 13 (1): The transparency obligations are set to enable an understanding of the outcomes and functioning of the respective AI System. Specifically, this entails the obligation to ensure that: (i) the AI System will be used properly, i.e., according to its intended purpose, by stating how the AI System actually works, (ii) details about the processed data are known, and (iii) the AI System's output is interpretable and can be explained to affected persons.
Article 13 (2): Requires that the high-risk AI System be accompanied by **instructions for use** [like a **“(Digital) User Manual”**] that help the deployer (the entity who is putting the AI System into use) operate and maintain the AI System as intended, and that support informed decision-making by the deployer. Such a User Manual has to incorporate the information referred to in Article 13 (3) and be available prior to putting the AI System into service or placing it on the market.
Article 13 (3): Specifies the concrete information that shall be communicated to reach sufficient transparency and satisfy Article 13 (1). This is the focus of the AI-SBOM and includes information such as the intended purpose of the AI System, known/foreseeable risks and misuses, desired input data, affected persons, etc. The AI-SBOM is not meant to replace or implement the instructions for use; it aims to support collecting the relevant information for the instructions for use during the development process of an AI System.
Thus, high-risk AI Systems shall be designed and developed in such a way that their operation is sufficiently transparent to enable the respective deployer (and the provider themselves, if they deploy their own AI System internally) to appropriately interpret and use the results of the AI System [“Procedural Transparency”]. Such Procedural Transparency, as outlined in Article 13, is particularly crucial from the AI value chain perspective, from the provider to the actual deployer of the AI System.
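As a rough, non-exhaustive orientation (not a legal mapping), the Article 13 (3) information items named above can be collected in the following sections of the template at the top of this page; the labels reuse only the article references already cited in this document:

```json
{
  "Intended purpose": "AI_System_Details.Intended_purpose",
  "Known/foreseeable risks and misuse (Article 13 (3) (b) (iv))": "AI_System_Details.Foreseeable_misuse_Forbidden_use",
  "Desired input data (Article 3 (32))": "Natural_Person_Interaction.Desired_Input_Data",
  "Affected persons": "Natural_Person_Interaction.Affected_Persons",
  "Human oversight and intervention": "Human_Oversight",
  "Logging of inputs and outputs": "Audit_Log.Log_Entries"
}
```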
## What is the scope of Article 52 AI Act? [EU Parliament's Proposal]
Article 52 AI Act aims to ensure the transparency of AI Systems in case natural persons and/or the general public are exposed to an AI System. This is ensured in three ways:
(i) Article 52 (1): If there is an interaction of the AI System with a natural person - like a Chatbot, Healthcare Diagnosis Tools used by doctors, or AI-driven robot financial advisors - such interactions have to be made transparent through a notification to the affected natural persons [“Interaction Transparency”].
(ii) Article 52 (2): If the AI System is an emotion recognition or biometric categorisation system, the affected person has to give their consent prior to the processing of such data (connection to the GDPR) [“Consent Transparency”].
(iii) Article 52 (3): If the AI System generates so-called “deep fakes”, such artificially generated content shall be disclosed in a visible manner, e.g. through “watermarks” [“Content Transparency”].
Notably, an AI System that is not classified as high-risk and therefore exempt from compliance with Article 13 may still be subject to the provisions of Article 52 if one of the three paragraphs applies. Conversely, if an AI System is classified as high risk, Article 52 might apply in addition.
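Read against the template, the three paragraphs of Article 52 roughly correspond to the following template fields. This is an orientation aid for filling in the template, not a legal mapping:

```json
{
  "Article 52 (1) - Interaction Transparency": [
    "Natural_Person_Interaction.User_Notification",
    "Natural_Person_Interaction.Information_Provided_To_Users"
  ],
  "Article 52 (2) - Consent Transparency": [
    "Natural_Person_Interaction.Input_Data_DataPrivacy.Special_Category_Personal_Data"
  ],
  "Article 52 (3) - Content Transparency": [
    "Generative_AI_Details.Watermarks",
    "Generative_AI_Details.Countermeasures"
  ]
}
```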
## Contributing
This draft is understood as a “living paper” mapping the state of an ongoing discussion and is open for feedback. We invite all stakeholders to share their insights and suggestions to enhance the tool's effectiveness and compliance capabilities. Please consider the following notes for feedback and discussion.
Note #1: This AI-SBOM Transparency is for discussion purposes and does not constitute legal advice. It is essential to consult with legal experts to ensure full compliance with the AI Act.
Note #2: We mainly worked with the proposal of the EU Parliament. The final text of the AI Act is still unknown. Also, the standards for Article 13 and Article 52 are under development and not yet published. The AI-SBOM is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
Note #3: Recognizing the variety of stakeholders involved in the AI lifecycle, each possessing varying degrees of technical know-how, we understand that transparency is not a one-size-fits-all attribute. AI systems should offer tailored transparency across the AI value chain, catering to the unique needs and perspectives of each stakeholder. This calls for a collaborative effort among all parties involved to ensure effective transparency.
Note #4: Please be aware that transparency is in intense tension (especially for proprietary AI Systems) with Data Privacy (access to / description of training data), IP/trade secrets (access to / description of the model), and Cyber Security (access to / description of training data plus model vulnerabilities) - altogether “Sensitive Information”.