The Biden Administration’s AI Bill of Rights is the latest pitch for greater data protections at the federal level as enterprises—and foreign states—adopt data-dependent machine learning systems to evolve their operations.
Announced in October 2022, this latest framework builds on the core principles for tech accountability and reform that the U.S. government has attempted to define in fits and starts for more than a decade. While recent legislation, including the CHIPS and Science Act, has funneled federal dollars into building cutting-edge technologies stateside, enforceable guidance on how to responsibly vet and manage new tech has not kept pace.
This absence of federal guidance has been especially glaring as data-driven solutions have skyrocketed in prominence within the enterprise space, promising more informed decision-making and automated systems powered by artificial intelligence (AI) and machine learning (ML). This new framework marries themes from previous, non-U.S. data privacy legislation with guidance from innovators across the AI space, all through a lens of social justice and equity that many experts argue hasn’t been prioritized to date.

What is the Blueprint for an AI Bill of Rights?
The Blueprint, which is not yet enforceable, is broken out into five principles that any organization developing or using AI should adhere to:
- Safe and Effective Systems: Citizens shouldn’t be exposed to untested or poorly qualified AI systems that could produce unsafe outcomes, whether for individuals, for specific communities, or for the operations leveraging individual data.
- Algorithmic Discrimination Protections: Simply put, AI models shouldn’t be designed in ways that build in bias, nor should systems be deployed that haven’t been vetted for potential discrimination.
- Data Privacy: Organizations mustn’t engage in abusive data practices, nor should the use of surveillance technologies go unchecked.
- Notice and Explanation: Individuals should always be informed when (and how) their data is being used and how it will affect outcomes.
- Human Alternatives, Consideration, and Fallback: Individuals should not only have the authority to opt out of data collection, but should also have a human practitioner they can turn to when concerns arise.
Why is an AI data framework important?
While the new guidelines are delivered under the banner of AI, they’re more akin to the broad-based consumer ‘Bill of Rights’ the European Union delivered back in 2018 via the General Data Protection Regulation (GDPR). Anyone working in tech (or, realistically, the enterprise space) in the 2010s will be familiar with GDPR, as it placed an enforceable framework around data privacy that had been startlingly absent before the regulation was adopted in 2016.
With billions in fines levied against Big Tech since GDPR took effect in 2018, it remains among the only globally significant legislative measures in place that gives individuals ownership of their personal information. (***Editor’s note: EU legislators are hard at work crafting their own AI-specific regulations that build on the principles already in action via GDPR.***)
Still, there are no significant data protection laws at the federal level stateside, even though California’s CCPA and similar legislation in Illinois and other states offer a template federal legislators could follow. Instead, most companies collecting data stateside are left to voluntarily adhere to ethical standards for data collection: in fact, most global enterprises simply apply the same protections and permissions to U.S. data that they do to data collected in the EU by default.
This isn’t to say there are no protections for personally identifiable information (PII) in the U.S., as HIPAA and state-level legislation help ensure companies aren’t abusing their access to sensitive healthcare information. But things get tricky as the breadth and variety of data businesses are capable of collecting expand on a massive scale, alongside the number of applications for AI and ML driving modern enterprise transformation.
AI in the Enterprise
What makes an AI-specific ‘Bill of Rights’ critical is that the scale and variety of data collected for many enterprise-grade AI applications far exceed what one would comfortably call “general data.”
One clear example is the rise of computer vision in the enterprise space. Computer vision applications involve taking any form of visual data (generally images or video) and creating deep learning models that can derive powerful business insights when crafted and managed thoughtfully. These can be used to monitor an assembly line, for instance, with a computer vision model trained to detect mislabeled or damaged goods, or to track shelf stock levels in retail, with models trained to recognize inventory levels.
However, some of the most powerful computer vision use cases have come from tracking actual human beings. At the height of the pandemic, for instance, computer vision models were trained to recognize when workers weren’t wearing appropriate personal protective equipment (PPE), enabling compliance detection from a socially-safe distance.
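As a rough illustration of what such a system involves, the sketch below shows how a PPE compliance check might be wired together in PyTorch: an ImageNet-pretrained backbone with a small two-class head, run over individual camera frames. The class names, file path, and the assumption that the head has already been fine-tuned on labeled frames are illustrative, not drawn from any specific deployment.

```python
import torch
from torch import nn
from torchvision import models, transforms
from PIL import Image

CLASSES = ["ppe_present", "ppe_missing"]  # hypothetical label names

# Start from an ImageNet-pretrained ResNet-18 and swap in a two-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))
model.eval()  # in practice the new head would be fine-tuned on labeled frames first

# Standard ImageNet preprocessing applied to a single camera frame.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify_frame(path: str) -> str:
    """Return the predicted class name for one frame pulled from a camera feed."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    return CLASSES[int(logits.argmax(dim=1))]

# Hypothetical file path; a real system would stream frames rather than read files.
print(classify_frame("frames/station_3.jpg"))
```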
The risks in these scenarios, both for the humans being tracked and for the companies managing computer vision models, are manifold.
For starters, visual data is unstructured, meaning there are no inherent descriptions or classifiable differentiators between one image and another that computer vision models can “learn” from. Instead, there is an explicit human-in-the-loop (HITL) element to designing computer vision models that calls for data managers to apply descriptions to an image (e.g., an unmasked worker versus a masked one) to inform the results of a given model.
This process of data labeling (or annotation) is only the start of the HITL cycle, as managers need to maintain these models long-term to ensure they continue to perform as expected. Bad data, such as mislabeled imagery, could enter the model and lead to inaccurate outcomes if data managers aren’t vetting inputs.
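To make the human-in-the-loop idea concrete, here is a minimal sketch of how a labeling pipeline might record multiple human annotations per image and hold back anything without strong reviewer agreement. The field names, labels, and agreement threshold are assumptions for illustration, not a reference implementation.

```python
from collections import Counter
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LabeledImage:
    image_id: str
    annotations: List[str] = field(default_factory=list)  # one label per human reviewer

    def consensus(self, min_agreement: float = 0.8) -> Optional[str]:
        """Return the majority label only when reviewers agree strongly enough;
        otherwise return None so the image is routed back for another human review."""
        if not self.annotations:
            return None
        label, votes = Counter(self.annotations).most_common(1)[0]
        return label if votes / len(self.annotations) >= min_agreement else None

# Only images with a confident consensus label are allowed into the training set.
queue = [
    LabeledImage("img_001", ["masked", "masked", "masked"]),
    LabeledImage("img_002", ["masked", "unmasked", "masked"]),  # disagreement: re-review
]
training_ready = [img for img in queue if img.consensus() is not None]
print([img.image_id for img in training_ready])  # -> ['img_001']
```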
Another phenomenon—known as data bias—could also impact machine learning models of all types (computer vision or otherwise) if those tasked with managing these algorithms long-term aren’t applying their own ethical and responsible standards to data collection and vetting.
Aimed at limiting data bias (and that’s just the start)
Facial recognition technologies, for instance, present a clear-cut use case for what data bias in action could portend. Numerous studies have shown that many of the advancements in facial recognition have been influenced by racially unbalanced data sets, which have been shown to disadvantage minority groups when these applications are deployed for surveillance, policing, or even vetting candidates for homeownership.
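One way teams try to catch this kind of imbalance before it reaches a trained model is a simple audit of how training records are distributed across demographic groups. The sketch below is a minimal, hypothetical version of such a check; the group labels, record format, and tolerance are illustrative assumptions.

```python
from collections import Counter

def audit_group_balance(records, tolerance: float = 0.10):
    """Flag any demographic group whose share of the training records deviates
    from an even split across groups by more than `tolerance`."""
    counts = Counter(record["group"] for record in records)
    total = sum(counts.values())
    expected_share = 1 / len(counts)
    return {
        group: round(count / total, 3)
        for group, count in counts.items()
        if abs(count / total - expected_share) > tolerance
    }

# Toy records standing in for labeled face images with a demographic tag attached.
dataset = [{"group": "group_a"}] * 700 + [{"group": "group_b"}] * 300
print(audit_group_balance(dataset))  # -> {'group_a': 0.7, 'group_b': 0.3}
```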
These headline-grabbing use cases are a throughline across the U.S.’s Blueprint for an AI Bill of Rights, as the framework promotes not just more effective AI systems, but non-discriminatory AI first and foremost.
However, the language of the blueprint emphasizes that “the technical capabilities and specific definitions of [AI systems] change with the speed of innovation, and the potential harms of their use occur even with less technologically sophisticated tools.”
This caveat is important to note, as it more or less acknowledges the inability of the federal government to keep pace with understanding the potential implications of the technologies that both fuel our economy and transform our society. On the one hand, AI developers have so far largely been able to develop and deploy new solutions without the red tape that might otherwise hinder innovation in the states.
But as we’re learning today, with more and more AI solutions changing their mission (e.g., IBM Watson) or switching gears entirely (e.g., Zillow), taking a more considered approach to developing AI solutions, one focused on responsible data management, will hopefully improve the efficacy and safety of these solutions long term.
