References
This page lists some of the resources I've gathered over the past five years and drew on while creating this library. Other resources are referenced directly in the cards themselves.
- Applying the ethics of AI: a systematic review of tools for developing and assessing AI-based systems | Artificial Intelligence Review
- All Tools
- Singularity card game
- AI Risk Categorization Decoded (AIR 2024)
- AI risk atlas — Docs | IBM watsonx
- AI Risk Repository
- Taxonomy of Risks posed by Language Models
- 13 Reasons Why Heat Maps Must Die
- Cyber Risk Assessment: Moving Past the “Heat Map Trap” - The Protiviti View
- 4 Steps to a Smarter Risk Heat Map - Safe Security
- Building Guardrails in AI Systems with Threat Modeling
- Synthetic Content: Exploring the Risks, Technical Approaches, and Regulatory Responses - Future of Privacy Forum
- New Confused Pilot Attack Targets AI Systems with Data Poisoning - Infosecurity Magazine
- A guide to adopting AI features in your company - Work Life by Atlassian
- AI and ESG
- Risk Management Profile for AI and Human Rights - United States Department of State
- AI Risks that Could Lead to Catastrophe | CAIS
- EU model contractual AI clauses to pilot in procurements of AI | Public Buyers Community
- Moralis Machina
- OpenAI's Approach to External Red Teaming (PDF)
- Adversarial robustness toolbox | IBM Research
- NVIDIA AI Red Team: An Introduction | NVIDIA Technical Blog
- Red Teaming AI Systems: The Path, the Prospect and the Perils | RSA Conference
- Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile
- Secure Software Development Practices for Generative AI and Dual-Use Foundation Models: An SSDF Community Profile
- Generative AI orientations (24-06-03_genai_orientations_en.pdf)
- Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure
- A statistical approach to model evaluations | Anthropic
- Code of Ethics
- Google AI Blog: Introducing the Model Card Toolkit for Easier Model Transparency Reporting
- Algorithmic discrimination in Europe - Publications Office of the EU
- Risk in AI & Algorithmic Auditing - YouTube
- Quantitative Privacy Risk Analysis
- FAIR Institute
- Privacy Impact Assessment - Canada
- NIST Risk Assessment Tools
- ISO 27557 Privacy Risk Management
- A (more) visual guide to the proposed EU Artificial Intelligence Act | Nikita Lukianets, Medium (Apr 2021)
- AI FactSheets 360
- Types of harm - Azure Application Architecture Guide | Microsoft Docs
- GitHub - Trusted-AI/adversarial-robustness-toolbox: Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
- 50 Years of Test (Un)fairness: Lessons for Machine Learning
- Enabling access, erasure, and rectification rights in AI systems | ICO
- Berryville Institute of Machine Learning
- Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade | Pew Research Center
- 10 steps to educate your company on AI fairness | World Economic Forum
- Progress on AI and algorithms (Voortgang AI en algoritmen) | Tweede Kamer der Staten-Generaal
- Machine learning compliance considerations
- Checklist for Artificial Intelligence in Medical Imaging (CLAIM): A Guide for Authors and Reviewers | Radiology: Artificial Intelligence
- Reproducible Deep Learning - Simone Scardapane
- Incorporate Ethics by Design Concepts Unit | Salesforce Trailhead
- Challenges and limits of an open source approach to Artificial Intelligence
- The Right to Process Data for Machine Learning Purposes in the EU
- GitHub - tensorflow/model-card-toolkit: a tool that leverages rich metadata and lineage information in MLMD to build a model card
- Providing Assurance and Scrutability on Shared Data and Machine Learning Models with Verifiable Credentials
- Artificial intelligence: the opinion of the CNIL and its counterparts on the future European regulation | CNIL
- “A Proposal for Identifying and Managing Bias in Artificial Intelligence”: a draft from NIST | Montreal AI Ethics Institute
- How to Force Our Machines to Play Fair | Quanta Magazine
- Blog: New toolkit launched to help organisations using AI to process personal data understand the associated risks and ways of complying with data protection law | ICO
- MFML Part 2 has arrived! - by Cassie Kozyrkov - Decision Intelligence
- Blog: Reflecting on the first year of the ‘Explaining decisions made with AI’ guidance | ICO
- Introducing Twitter’s first algorithmic bias bounty challenge
- News Release: DHS S&T Releases Artificial Intelligence & Machine Learning Strategic Plan | Homeland Security
- OpenAI's Codex model turns ordinary language into computer code - Axios
- Hogan Lovells responds to the European Commission’s consultation on AI
- Federal Register: Artificial Intelligence Risk Management Framework
- PCPD Publishes “Guidance on Ethical Development and Use of AI” Media Statement
- Framework of Meaningful Engagement
- The Launch Space - A roadmap to more sustainable AI systems - YouTube
- AI Ethics Living Dictionary | Montreal AI Ethics Institute
- Stasis in AI Ethics - YouTube
- Deep Neural Networks are Surprisingly Reversible: A Baseline for Zero-Shot Inversion
- A Beginner’s Guide for AI Ethics | Montreal AI Ethics Institute
- Algorithmic Risk Assessments Can Alter Human Decision-Making Processes in High-Stakes Government Contexts
- AI Ethics Maturity Model | Montreal AI Ethics Institute
- Mapping value sensitive design onto AI for social good principles | Montreal AI Ethics Institute
- Six Essential Elements Of A Responsible AI Model
- Warning Signs: The Future of Privacy and Security in an Age of Machine Learning (Research summary) | Montreal AI Ethics Institute
- The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations
- Concrete Problems in AI Safety
- What Do We Need to Build More Sustainable AI Systems? | GSF
- Ethics-based auditing of automated decision-making systems: intervention points and policy implications | Montreal AI Ethics Institute
- AI Ethics: Enter the Dragon! | Montreal AI Ethics Institute
- Examining the Black Box: Tools for Assessing Algorithmic Systems (Research Summary) | Montreal AI Ethics Institute
- NIST Taxonomy of AI risks
- Putting AI ethics to work: are the tools fit for purpose? | Montreal AI Ethics Institute
- UK government publishes pioneering standard for algorithmic transparency - GOV.UK
- Artificial Intelligence and the Privacy Paradox of Opportunity, Big Data and The Digital Universe | Montreal AI Ethics Institute
- Explaining the Principles to Practices Gap in AI | Montreal AI Ethics Institute
- UNESCO’s Recommendation on the Ethics of AI | Montreal AI Ethics Institute
- Guideline on the quality of AI in healthcare, delivered by and for the field | News release | Data voor gezondheid
- Can a Model Be Differentially Private and Fair?
- Privacy and responsible AI
- Provisions on the Administration of Algorithm Recommendations for Internet Information Services
- GDPR compliance of processing operations that embed Artificial Intelligence: An introduction
- An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems - ScienceDirect
- Fairgen | Biased Data
- Enhancing Trust in AI Through Industry Self-Governance | Montreal AI Ethics Institute
- Advancing accountability in AI
- OECD AI Principles
- OECD, AI Language Models
- NIST AI Risk Framework
- IEEE Standards on Autonomous and Intelligent Systems
- Algorithmic Transparency (Algoritmische Transparantie)
- Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making - Future of Privacy Forum
- Governance Guidelines for Implementation of AI Principles
- AI: Decoded: China’s deepfake law — Synthetic data — Selling sensitive data for profit – POLITICO
- Model Evaluation, Model Selection, and Algorithm Selection in Machine Learning
- The winter, the summer and the summer dream of artificial intelligence in law
- Representation and Imagination for Preventing AI Harms | Montreal AI Ethics Institute
- Google AI Blog: Federated Learning with Formal Differential Privacy Guarantees
- Maintaining fairness across distribution shift: do we have viable solutions for real-world applications? | Montreal AI Ethics Institute
- Explainability: Robustness and Usefulness in AI Explanation Methods | Montreal AI Ethics Institute
- The AI Carbon Footprint and Responsibilities of AI Scientists | Montreal AI Ethics Institute
- AI Risk Management Framework
- Ten guidelines for product leaders to implement AI responsibly
- One machine learning question every day - bnomial
- Children Rights Impact Assessment
- Threat Modeling AI/ML Systems and Dependencies
- Vulnerabilities of Connectionist AI Applications: Evaluation and Defense
- ENISA: Artificial Intelligence Cybersecurity Challenges
- NCSC AI Security
- WEF Artificial Intelligence for Children 2022
- We need redress by design for AI systems
- Privacy Preserving Machine Learning: Threats and Solutions
- Security and Privacy Issues in Deep Learning
- AI Blindspot
- AI Liability Key Challenges
- AI Liability Considerations
- EU guidelines on ethics in artificial intelligence: Context and implementation
- Artificial Intelligence and Data Protection How the GDPR Regulates AI
- Does the Correspondence Bias Apply to Social Robots?: Dispositional and Situational Attributions of Human Versus Robot Behavior
- The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
- An EU Artificial Intelligence Act for Fundamental Rights: A Civil Society Statement
- Guiding Principles on Business and Human Rights
- Accountability Principles for Artificial Intelligence (AP4AI) in the Internal Security Domain
- Getting the future right: Artificial Intelligence and Fundamental Rights
- Operational Guidance on taking account of Fundamental Rights in Commission Impact Assessments
- Responsibility and AI
- AI for Healthcare Robotics
- STRIDE-ML Threat Model
- STRIDE-AI: An Approach to Identifying Vulnerabilities of Machine Learning Assets
- Securing Machine Learning Algorithms, ENISA
- Data Minimization for GDPR Compliance in Machine Learning Models
- Does the curse of dimensionality affect some models more than others?
- Towards Breaking the Curse of Dimensionality for High-Dimensional Privacy
- Differential Privacy Blog Series
- Hunting for Discriminatory Proxies in Linear Regression Models
- What-If Tool: Playing with AI Fairness
- ICO: Lawful basis for processing
- Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI
- EDPS Guidelines on assessing the proportionality of measures that limit the fundamental rights to privacy and to the protection of personal data
- Charter of Fundamental Rights of the European Union
- AI Fairness - Explanation of Disparate Impact Remover
- Mitigating Bias in AI/ML Models with Disparate Impact Analysis
- Certifying and removing disparate impact
- Avoiding Disparate Impact with Counterfactual Distributions
- Random Oversampling and Undersampling for Imbalanced Classification
- Requirements for audits of data processing involving AI (Requisitos para Auditorías de Tratamientos que incluyan IA)
- Oversampling and Undersampling
- Explainable Artificial Intelligence (XAI)
- LIME
- Z-Inspection
- Why Should I Trust You? Explaining the Predictions of Any Classifier
- SHAP and LIME: An Evaluation of Discriminative Power in Credit Risk
- IBM: Explainable AI
- Text Mining in Survey Data
- Automation Bias
- The Flaws of Policies Requiring Human Oversight of Government Algorithms
- GDPR compliance of processing that incorporates Artificial Intelligence: an introduction (Adecuación al RGPD de tratamientos que incorporan Inteligencia Artificial. Una introducción)
- The False Comfort of Human Oversight as an Antidote to AI Harm
- A Proposal of Accessibility Guidelines for Human-Robot Interaction
- ISO/IEC 40500:2012 Information technology — W3C Web Content Accessibility Guidelines (WCAG) 2.0
- ISO/IEC GUIDE 71:2001 Guidelines for standards developers to address the needs of older persons and persons with disabilities
- ISO 9241-171:2008(en) Ergonomics of human-system interaction
- Mandate 376 Standards EU
- UNSDGs United Nations Sustainable Development goals
- Ethics guidelines for trustworthy AI
- Evolution in Age-Verification Applications
- Generalization in quantitative and qualitative research: Myths and strategies
- Generalizing statistical results to the entire population
- A Proposal for Identifying and Managing Bias in Artificial Intelligence
- Hidden dangers of ChatGPT
- The role of reciprocity in human-robot social influence
- Reciprocity in Human-Robot Interaction
- Social robots and the risks to reciprocity
- Value alignment
- AI Values and Alignment
- Online Ethics Canvas
- Synthetic Data – Anonymisation Groundhog Day
- Datasheets for Datasets
- OWASP API Security Project
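Several of the fairness entries above (for example "Certifying and removing disparate impact" and "AI Fairness - Explanation of Disparate Impact Remover") revolve around the disparate impact ratio and the "four-fifths rule". As a minimal, hypothetical sketch of how that metric is computed — the function name and all numbers are mine for illustration, not taken from any of the referenced works:

```python
# Hypothetical sketch of the disparate impact ratio ("four-fifths rule").
# All names and figures are illustrative, not from the referenced papers.

def disparate_impact_ratio(selected_protected: int, total_protected: int,
                           selected_reference: int, total_reference: int) -> float:
    """Ratio of the protected group's selection rate to the reference group's."""
    rate_protected = selected_protected / total_protected
    rate_reference = selected_reference / total_reference
    return rate_protected / rate_reference

# Example: 30 of 100 protected-group applicants selected,
# versus 60 of 120 reference-group applicants.
ratio = disparate_impact_ratio(30, 100, 60, 120)
print(round(ratio, 2))  # 0.3 / 0.5 = 0.6, below the common 0.8 threshold
```

A ratio below 0.8 is the conventional red flag that the selection process may adversely impact the protected group; the referenced works discuss both certifying this property and removing it from data.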
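The resampling entries above ("Random Oversampling and Undersampling for Imbalanced Classification", "Oversampling and Undersampling") describe balancing imbalanced classes before training. A minimal stdlib-only sketch of random oversampling, assuming the classes have already been split into two lists (the function is mine, purely illustrative):

```python
import random

# Illustrative sketch of random oversampling for class imbalance:
# duplicate randomly chosen minority samples until the classes balance.
def random_oversample(majority: list, minority: list, seed: int = 0) -> list:
    rng = random.Random(seed)  # fixed seed for reproducibility
    resampled = list(minority)
    while len(resampled) < len(majority):
        resampled.append(rng.choice(minority))
    return majority + resampled

balanced = random_oversample(["neg"] * 5, ["pos"] * 2)
print(balanced.count("neg"), balanced.count("pos"))  # 5 5
```

Random undersampling is the mirror image (randomly dropping majority samples); production work would typically reach for a dedicated library rather than this sketch.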