    What is enterprise decentralization?

    While the pandemic may have pushed enterprise decentralization into the mainstream, it’s a concept that many cloud-forward businesses have been working toward since well before 2020. Much more than just a means for productive social distancing, decentralization strategies are designed to make businesses more agile by default.

    In the broadest sense, enterprise decentralization allows IT organizations to shrug off the legacy systems and on-prem hardware that confined business to the office. Instead, decentralized organizations leverage a tech stack populated increasingly by cloud-based solutions that can enable worker productivity from virtually anywhere. This means fewer tools hosted in bulky corporate data centers, and perhaps most critically, cloud-based data storage and exchange. 

    There are, of course, many different flavors of enterprise decentralization, and industry-specific limits on how far cloud migration can go. For starters, many businesses won’t ever be able to fully offload all of their corporate data to cloud environments, or take the risk of enabling remote access to certain kinds of data in the first place. That’s why many organizations take a hybrid approach to enterprise decentralization (one many modern workers are likely familiar with), where they leverage a mix of cloud and legacy on-prem solutions, ostensibly to ensure sensitive data remains secure.

    However, even businesses that take a hybrid cloud approach are often following their own custom strategy. While the general idea of decentralization hinges on greater adoption of cloud, there is truly no one-size-fits-all strategy when it comes to migrating workflows and data to new environments—nor are the tradeoffs, benefits, or consequences the same for every organization. 

    Unique business considerations for enterprise decentralization

    In healthcare, for instance, HIPAA and other regulations place strict requirements on how organizations must protect personally identifiable information (PII). At the same time, healthcare providers need to be sure that patients have quick and easy access to their medical data—and that’s not even taking into consideration global data protection and governance standards that make quick access to personal data an enforceable right in some jurisdictions.

    Businesses in finance have their own wealth of considerations and potential pitfalls when it comes to decentralizing their operations, too. For starters, it’s important not to conflate decentralization in finance with blockchain or crypto, which at their baseline are designed to separate investments from traditional, legacy money markets. For traditional banking systems, the rigidity of legacy systems is part of the appeal, as most consumers simply require an accessible and secure storage partner for their assets and income. This calls for financial information to be stored in heavily privatized and secure environments out of necessity.

    But as consumer tastes change—with mobile banking and healthcare applications as two examples—workers increasingly favor digital workflows in kind. This is where the lines between enterprise decentralization, digital transformation and the various degrees of cloud migration and adoption begin to blur, as each of these concepts ultimately hinges on giving modern workers the “user experience” they increasingly covet. 

    All of this puts the onus on IT, and calls for a reframing of how enterprises view this area of the business in 2023 and beyond.

    Start with IT decentralization

    While enterprise decentralization is the result, IT decentralization is the means to that end. For enterprises to successfully deliver greater agility across the business and the level of digital UX that modern workers crave, they need to move IT teams out of their traditional silos and instill digital literacy across the organization.

    Historically, IT was treated as the one-stop-shop for all enterprise tech. These teams were the gatekeepers when it came to accessing corporate data assets, using specific tools and even leveraging hardware—and usually within the four walls of the corporate office.

    Today, the corporate network is the new office, and rather than securing the corporate on-prem data center, IT and NOC teams need to focus first and foremost on the extended, distributed enterprise network when setting their marching orders. In fact, it makes sense for many organizations that haven’t yet done so to fully separate their network operations teams from the traditional IT unit, so that there are experts focused explicitly on ensuring network performance and security in a decentralized setting.

    Simply put, network management requires undivided expertise and management to ensure enterprise success, and splitting this focus by putting an undue IT burden on network-focused teams is too risky.

    At the same time, greater technological literacy needs to be mandated across the organization. With low-code and no-code solutions gaining prominence across marketing, sales, HR and virtually all other traditional corporate environments, traditional IT leadership—whether it’s the CIO, CTO or even engineering leads—needs to create partnership resources for tech-savvy individuals on non-IT teams.

    According to a recent survey from Zoho, 99 percent of IT leaders polled agree that marketing (52 percent), finance (45 percent), and sales (43 percent) teams need more technical training to launch low-code/no-code products. Going forward, it may make sense for separate IT professionals to be staffed within each of these corporate departments, as opposed to centralized within a single IT department that also encompasses network management (i.e., the legacy IT model).

    This of course is easier said than done, as dismantling and decentralizing a core unit of the business will inevitably come with objections. For instance, without a clear order of operations for approving new tools company-wide, businesses run the risk of Shadow IT with all of its potentially hazardous repercussions. There’s also the risk that teams pay for redundant licenses for products already in use elsewhere in the business, or simply oversaturate themselves with applications that ultimately hinder productivity rather than nurture it.

    The biggest consideration for enterprise leadership to bear in mind when navigating decentralization is ensuring that technical expertise is instilled in all areas of the business. At the same time, decentralization cannot be conflated with siloing, as department-specific technical leads must still work in concert with their peers across the organization to ensure redundancies are avoided and standards are adhered to across the business.

    What the latest Big Tech layoffs mean for the future of work

    There’s no sugarcoating the recent round of Big Tech layoffs. Tens of thousands of talented staff were let go almost simultaneously across the FAANG workforce and beyond (see Twitter), with Apple among the only domestic tech giants that hasn’t frozen recruiting efforts as 2022 comes to a close.

    Even so, Apple is taking a much more cautious approach to hiring today than it has in years past, while Meta and Amazon have shaved off—read: forcefully amputated—double-digit percentages of their workforces.

    It’s a major reversal of the “job seekers market” that characterized the start of the Covid-19 pandemic, when work-from-anywhere broadened access to top talent and jobs while creating a boom market for tech solutions that helped enable remote productivity. 

    It’s also a major reversal for the long-term outlook of Big Tech. It might not be far-fetched to view these latest layoffs as akin to the “dotcom bubble” that burst at the turn of the millennium.

    In that scenario, seemingly endless funds were funneled into nascent internet-born companies that staffed up in an arms race for talent. In short order, the buzziest of those brands endured mass layoffs and a slew of mergers and acquisitions before viable long-term “digital economy” businesses actually emerged.

    That’s not to say those experiencing these latest layoffs should look back to 2000 for an idea of what’s next for their former or future employers. While many corporate leaders are characterizing the layoffs as a “market correction” after following misguided forecasts during the pandemic, a look at the titles and departments targeted by the layoffs paints a different picture. 

    Layoffs target humans-in-the-loop of ML and AI

    For starters, engineering talent of all kinds was first on the chopping block at Meta (née Facebook), Twitter, Salesforce and even former unicorn startups Stripe and Lyft. These are the literal platform builders who not only helped shape the user interfaces we’re all familiar with, but helped make these spaces safe and enjoyable.

    At Twitter, for instance, the company’s new leadership was almost proud to give pink slips to the ML Ethics, Transparency, and Accountability team led by Rumman Chowdhury, someone widely considered a rockstar in the ML space. While there is a LOT to unpack about Twitter’s current and future state, it’s worth noting that Meta similarly downsized their Probability unit, which itself focused on developing Meta’s ML infrastructure. 

    While ethics wasn’t as explicit a marching order for Meta’s 50-person Probability unit as it was for the ML team Twitter offloaded, Probability literally represented the human-in-the-loop element required for steering successful ML and AI. Without the data governance and oversight necessary to ensure ML and AI applications perform as expected—including accounting for the data bias that’s been shown to deliver negative AI/ML outcomes on Facebook specifically—many of Meta’s ML-powered initiatives are essentially “steering blind.”

    These layoffs emphasize how much humans literally sit at the center of technologies considered Artificial Intelligence. While the name implies “computers supplanting humans,” these systems run on models that require heavy human curation that’s anything but one-and-done. Rather, entire teams (ahem, Meta’s Probability unit) need to be tasked with handling ML infrastructure on a global scale—and the stakes of poor management couldn’t be higher.

    It’s also worth emphasizing just how in-demand AI and ML talent seemingly remain: Look no further than any non-FAANG tech company’s careers page for proof. So what should be the big takeaway from all of these layoffs—beyond, in Twitter’s case, willful trolling?

    Leveraging ML and AI talent for real-world solutions

    A recent report from Protocol emphasized that while social media giants may be less keen to keep the talent they hoarded in boom times, Green Tech companies are more than happy to step in and leverage that talent to build world-changing solutions. Similarly, investors continue to funnel money into tech startups fueled by ML and AI techniques that require the kind of experience dealing with unstructured data that former Meta and Twitter engineers have in spades.

    So the real lesson here may be less an indictment of AI and ML and more of a repositioning—if not “market correction”—on the efficacy of social media.

    Both Meta and Twitter face an uncertain future because they’ve failed to make their platforms more enjoyable for users, while reporting increasingly disappointing ad revenues that underline their flailing business cases. It’s easy to pin this on TikTok and shifting consumer tastes, but it’s also a failure of these businesses to put their AI and ML practices to effective use in delivering pleasing outcomes for new consumer appetites. 

    This paints a rosier picture for those recently laid off by Big Tech compared to those who were left jobless following the dotcom bubble burst. That’s because the skills these engineers have honed managing and curating unstructured data for a social media giant are widely applicable in the current job market. 

    The lesson to bear in mind going forward is that social media may simply be evolving into something very different than what it was when Facebook and Twitter were at their peak valuations. The flip side is that the market for tangible solutions addressing the myriad real-world struggles social media has laid bare over the last decade is growing in tandem with social media’s descent.

    There are unfortunately bound to be similar headlines around layoffs and “bad quarters” across Big Tech going forward, but it’s still likely safe to characterize the current state of the workforce as reshuffling—at least for now.

    Low-code and No-code: Why Citizen Developers aren’t Shadow IT

    Low-code and no-code solutions can be a double-edged sword for the enterprise, offering a faster path to automation while empowering individuals outside the traditional walls of IT to take innovation into their own hands. These tools represent a new (albeit familiar) strategy for third-party involvement at the corporate level, helping to drive digital transformation in all areas of the business. But they also invite a mix of experts (and non-experts) into the once closely-guarded fray of custom application development, which should give any security-minded members of the C-Suite pause.

    This democratization of development has the potential to be a boon, lowering the barrier to entry for forward-thinking citizen developers to start creating their own point-specific solutions. After all, the scope of traditional corporate IT encompasses the whole organization, and a chronic lack of developer talent spreads these teams thin in driving innovation equally across the business. With cost-effective, low-code and no-code services, enterprises can instead offload the coding aspect of application development—if not supplant it entirely—while enabling non-coders to leverage their specific subject-matter expertise for new, innovative applications.

    But as any CISO can attest, the risk often outweighs the benefit when non-IT stakeholders are allowed heavy influence over the enterprise tech stack. Shadow IT hasn’t gone anywhere, despite the term losing a bit of buzzworthiness since peaking as a hot C-Suite topic at the start of the pandemic. Workers may have started returning to the office, but legacy, on-premises solutions aren’t being re-adopted—nor have the cloud-delivered solutions that enabled flexible, work-from-anywhere productivity been forsaken.

    While it’s not fair to conflate low-code and no-code tools with Shadow IT, concerns about the latter haven’t dissipated even as the benefits of the former have, arguably, been put on a pedestal. It’s not altogether surprising, either, as the same factors that drove a major shift in workforce dynamics have forced businesses to embrace optimization and digital transformation as a topline mandate.

    Driving digital transformation at any cost?

    Look no further than the rapid decentralization of corporate networks that came about as a necessity when physical offices closed in 2020. Businesses that hadn’t migrated some operations to the cloud, for instance, or developed enterprise maturity around remote access via VPN or SD-WAN felt the sting acutely when the pandemic forced them to change gears. 

    As a result, many of these businesses were forced to embark on rapid digital transformation that left a lasting impact on how the C-Suite prioritized innovation: If there’s another world-changing event akin to the pandemic, enterprises need to be ahead of the curve, not resting in a reactive posture that diminishes their market value. 

    But this rapid, pandemic-induced, global digital transformation elevated the conversation around Shadow IT markedly, as the remote workforce underlined how little control corporate IT teams had in protecting legacy enterprise systems in a work-from-anywhere world. When employees access corporate networks directly over the internet (DIA)—that is, without VPN, SASE, SD-WAN or other software-defined access protocols—there is already a litany of potential threats stemming from traditional cybersecurity concerns (i.e., circumvented firewalls).

    It’s when IT teams aren’t able to supply the workforce with the tools and protections they need to work effectively in a digital-first world that non-IT teams will start taking matters into their own hands. If a corporate-licensed Microsoft Teams account constantly fails to deliver jitter-free conferencing, for instance, users may just deploy their own non-corporate Zoom to connect with co-workers in a pinch.

    This only scratches the surface when it comes to the potential for dangerous data sharing when employees are using non-approved collaboration software, for instance, or even sharing files over non-corporate email accounts or cloud drives. The reason many workers pursue Shadow IT like this in the first place is because the corporate-approved solutions are inadequate. The pandemic put existing inadequacies on blast and ultimately paved the way for more democratized IT decision making—if not an outright call for citizen developers.

    Why low-code and no-code is different from Shadow IT

    This is the moment where it’s important to draw a baseline distinction between Shadow IT and true low-code/no-code tools. While Shadow IT is generally a secretive endeavor (whether or not intentionally), low-code and no-code solutions are most often third-party service providers that work both with the approval of and in collaboration with corporate IT. When certain areas of the business need a new application but lack the IT resources to deliver it at speed and scale, a third-party provider can partner with the eventual end users to develop an ideal new solution. All of this can be done without stretching the resources of an already thin IT team.

    Another necessary distinction: the introduction of low-code and no-code toolsets isn’t an indictment of corporate IT, either. To the most cynical-minded, the rise of the citizen developer could be seen as an against-all-threats bet on innovation by members of the C-Suite who fear falling behind on digital transformation. Even more cynically, putting these transformation efforts into the hands of non-corporate developers could be seen as “betting the house” by security-minded IT leaders who are beholden to the “on-premises or else” mantra that was pervasive pre-pandemic.

    Instead, IT shouldn’t view the citizen developer as a short-sighted solution to much larger, potentially existential corporate challenges. Rather, IT teams need to be reimagined as stewards of the network first—a mantle that has been most enterprise IT teams’ top marching order since the start of the pandemic—with an accompanying focus on ensuring workers can perform safely and effectively from wherever they log on.

    With safe and performant network access squared away, each team across the corporation should have its own innovation mandate that allows it to explore low-code and no-code solutions knowing its network foundation is safe and effective. That’s all to say that innovation is no longer just the mandate of IT, and organizations need to be armed with forward thinkers and tools across departments to approach their work with digital transformation in mind.

    It ultimately also comes down to teams choosing low-code and no-code partners with a proven track record of delivering solutions at a faster pace than in-house development teams have managed on their own. Given that low-code and no-code are still a relatively nascent proposition (and on a massive growth trajectory), any areas of the business seeking out these partnerships simply need to be diligent in their vetting—not hasty in their rush to deploy solutions for the sake of meeting an innovation mandate.

    Digitizing Lean: Putting humans at the center of Industry 4.0

    Dynamics within the workforce are shifting, with employers struggling to accommodate a new generation’s shared mentality around how, where and why we do the jobs we do. 

    Nowhere is this more acute than in manufacturing and frontline operations, as Industry 4.0 (I4.0) technologies that have long promised to transform operations for the benefit of workers and businesses have—to date at least—largely failed to deliver. That’s because elements of I4.0 are often introduced without consideration for the humans steering them. This results in both missed targets and waste, as well as an outright negative view of I4.0 principles from workers who have so far been alienated by the concept.

    On the one hand, those entering the workforce are rightfully more introspective about the complexity and satisfaction of the jobs they perform in a post-pandemic world. To that end, these workers are accustomed to digital workflows that mirror the seamlessness and personalization of their digital lives as consumers—something that many I4.0 initiatives have so far failed to bring into the factory.

    On the other hand, the previous generation of workers is worn weary both by the long-tail promise of I4.0, which so far has failed to improve their day-to-day, and by the prospect of onboarding the incoming workforce with a toolkit that older workers may not have faith in.

    All of this only scratches the surface of the staffing challenges facing manufacturers, however, as there are expected to be 2.1 million unfilled manufacturing jobs by 2030.

    Taking staffing woes out of the equation, studies show that 70 percent of manufacturing errors are still attributed to humans, regardless of how much new technology has been adopted across the supply chain. This speaks to an even greater “people problem” for manufacturers, as it alludes to a flawed human-in-the-loop consideration at every step in the supply chain. 

    Enabling Lean thinking across the supply chain

    At the heart of this disconnect is a lack of Lean thinking across manufacturing workflows. In many cases where I4.0 is failing to meet ROI, manufacturers aren’t arming stakeholders at each step in the supply chain with the tools they need to adopt this mentality organically.  

    One-piece flow, for instance, which speaks to a streamlined manufacturing process—from the factory all the way up to the end-user/customer—is an I4.0 concept that relies on synchronicity across systems to ensure Lean manufacturing principles are embraced at every step. 

    It’s that synchronicity that’s currently missing from many manufacturing workflows today, and which is ultimately hindering both the data and skill share necessary to seize upon the promise of I4.0.

    So how can manufacturers start bridging the gaps between Lean thinkers—who often live in the C-Suite—skilled workers and industrial engineers on the factory floor, and the various subject-matter experts (SMEs) across the larger supply chain? It all starts with combining advanced, digital manufacturing solutions with Lean techniques to identify vulnerabilities and areas for optimization within operations to:

    • Zero-in on root cause and reduce error rate across the supply chain
    • Reduce training time
    • Increase production yield
    • Achieve faster time-to-value and time-to-market.

    Brass tacks: What is Digitized Lean Manufacturing?

    At its core, the concept of Lean Manufacturing is centered around creating more value for customers while reducing waste. It encompasses a systematic framework for eliminating waste from a manufacturing system, or value stream, without sacrificing productivity. 

    In practice, Lean manufacturing identifies eight areas within the supply chain where waste proliferates, including:

    • Transport, i.e., unnecessary steps in delivery
    • Inventory, which plays out in overstocked warehouses
    • Motion, referring to too many people or machinery involved in production
    • Waiting, whether that’s idle manpower or equipment
    • Overproduction of goods, usually as a result of poor planning
    • Over-processing, i.e., spending excessive time designing unnecessary product features
    • Defects, calling for unplanned costs and effort
    • And Unutilized Talent.

    While any of these waste areas can proliferate in isolation, this last factor—Unutilized Talent—tends to be the inciting element that sets the other seven in motion. If manufacturers have workers who aren’t empowered to efficiently and effectively handle each of these respective workflows from the start, waste and inefficiencies are almost inevitable.

    This tees up a cycle that many in the supply chain space are all too familiar with, one that not only undermines a manufacturer’s ability to be Lean but also drives up operational expenditures.

    Digitizing Lean involves marrying strategies that prevent waste in these key areas with advanced digital technologies and analytics that promote Continuous Improvement and process visibility. This includes the introduction of tools that enable active, automated data collection at each step of the manufacturing workflow, replacing manual tracking that may otherwise take place in silos.

    Barriers to Digitizing Lean

    Despite holding so much promise, there have been significant institutional barriers in Digitizing Lean strategies to date. Among these is the age-old skepticism about the security of new solutions. 

    Many manufacturing processes are custom and unique—if not fully proprietary—and while certain tried-and-true processes may not be digitally advanced, they’ve proven effective and secure enough for decision makers to not want to rock the boat.

    To that end, skepticism over introducing cloud technology within environments where sensitive data can be openly shared still remains among many decision makers, despite the efficacy of cloud solutions being well established across frontline operations. Instead, many teams have continued funneling money into legacy ERP systems that, while custom, aren’t effectively driving Continuous Improvement.

    That’s because without the end-to-end visibility of a Digitized Lean workflow, SMEs remain in silos, which enforces rigid implementation and hides opportunities to prevent downtime and improve efficiency.

    There’s also apprehension about going “all-or-nothing” on I4.0 initiatives, which taken as a whole can be a huge undertaking for organizations. But when there is collaboration across workflows that allows Lean Thinkers and more industrial engineers to share knowledge and skills, Continuous Improvement and digital transformation will start to take hold almost organically. 

    It all comes down to giving teams the tools they need to bring all members into a Lean Thinking mentality while giving them contextual visibility into the entire workflow. While Digitizing Lean doesn’t have to be an all-or-nothing pursuit at the start, there are some key areas where digital Lean transformation may be challenging to launch but the payoff can be significant, including:

    • Andon Systems: Creating systems that signal downtime requires a lot of custom technology—and frankly, work—that hinges on specific software, hardware and subject matter expertise. A platform like Tulip, however, enables the industrial engineers tasked with creating these systems to become Lean thinkers, leveraging the data they’re collecting throughout the process to create applications that can more intelligently alert to downtime and map out resolution from a single workstation. (A minimal sketch of this kind of event logging follows this list.)
    • Kanban: Similarly with Kanban (or any other kind of inventory management system), a single-pane-of-glass that centralizes data collection and actually automates manual tracking is simply more effective and actionable than legacy systems, which are often manual and by default siloed.
    • Motion study: If a production line has stopped for some reason, what does an operator need to do? While standard operating procedure dictates that these line workers will need to go to maintenance when downtime occurs, Lean thinking needs to be folded into this process to track how quickly and effectively these remediations are being executed to remove redundancies in future scenarios. 
    • Time studies: Improving speed without compromising quality is one of the key facets of Lean thinking. Paradoxically, the process of manually conducting time studies can itself be a time-drain before the results are even put to analysis. With platforms that digitize and synchronize this whole process, motion and time studies can be conducted almost passively, allowing stakeholders to share details about downtime while deriving actionable insights around removing redundancies.
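
    To make the Andon and time-study points above concrete, here is a minimal sketch of a centralized downtime event log in plain Python. The station names, reason codes, and the simple aggregation are hypothetical simplifications for illustration; a platform like Tulip exposes this kind of capability through its own apps and data model rather than raw code.

    ```python
    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import List, Optional

    @dataclass
    class DowntimeEvent:
        """A single Andon signal: which station stopped, why, and for how long."""
        station: str                        # hypothetical station name
        reason: str                         # e.g. "material shortage", "jam"
        started: datetime
        resolved: Optional[datetime] = None

        @property
        def duration(self) -> Optional[timedelta]:
            return (self.resolved - self.started) if self.resolved else None

    class AndonLog:
        """Centralized event log replacing manual, siloed downtime tracking."""

        def __init__(self) -> None:
            self.events: List[DowntimeEvent] = []

        def signal(self, station: str, reason: str) -> DowntimeEvent:
            event = DowntimeEvent(station, reason, started=datetime.now())
            self.events.append(event)       # in practice: alert maintenance here
            return event

        def resolve(self, event: DowntimeEvent) -> None:
            event.resolved = datetime.now()

        def time_study(self) -> dict:
            """Total resolved downtime by reason (a passive 'time study')."""
            totals: dict = {}
            for e in self.events:
                if e.duration is not None:
                    totals[e.reason] = totals.get(e.reason, timedelta()) + e.duration
            return totals

    log = AndonLog()
    event = log.signal("station_3", "material shortage")  # worker pulls the cord
    log.resolve(event)                                    # maintenance responds
    print(log.time_study())
    ```

    Because every event lands in one centralized log as it happens, the time study falls out of the data passively rather than being a separate manual exercise, which is exactly the synchronicity described earlier.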

    Industrial engineers need access to all of this data in concert to create successful applications that execute on these Lean thinking strategies. A centralized platform not only makes it easier to track and share all of this data, but opens the door for greater collaboration across SMEs, C-level Lean thinkers and industrial engineers. 

    The value of this knowledge and skill share cannot be overstated, as it lends critical context to every step of the supply chain and can deliver actionable insights almost immediately.

    Avoiding pitfalls when Digitizing Lean

    While synchronous technology is critical, the success of Lean digitization still hinges on the humans spearheading these initiatives. Poorly targeted technology projects and process improvements of any kind can lead to program fatigue and negative returns—not to mention a lack of faith in new processes from stakeholders at every step of the workflow. 

    To ensure successful digital Lean implementation, teams need to embrace the following four tenets:

    1. Value first, technology second: Focus on solving urgent problems, not exploring the technology, when proposing Digitized Lean solutions. When teams come to a project with clear, measurable goals that are tied to solving a real pain point, the most valuable features of the new tech will come to light organically. If, instead, a team is focused on testing out all components of the new tech without a clear problem to solve in mind, there will inevitably be skepticism about the true value a solution delivers.
    2. Identify the right starting point: Teams must evaluate the people, process and technological readiness within a line and be as specific as possible. Engineers need to be armed with the right tools and training to execute on these processes, and it’s important to understand what existing gaps in expertise or strategy need to be filled for Lean thinking to be shared across teams. 
    3. Secure stakeholder buy-in: Develop user personas for each stakeholder, conveying how a new tool can add value to their daily job. If the person performing a specific job isn’t empowered to raise flags when issues occur or pull in other stakeholders for help in the process, continued isolation will inevitably turn this worker sour on the process. With access to a robust collaboration toolset, this can be avoided. 
    4. Focus on gradual growth: A digital Lean transformation uses a composable approach to identify critical issues, determine measurable goals, deploy short pilots one at a time, and iterate to improve. Leaders need to focus on the low-hanging fruit at the start of digital Lean transformation to add the most value with the least effort. This will in turn help improve buy-in, inform starting points for future projects, and help teams better visualize the value of digitization from the start.

    By deploying Digitized Lean in a considered, gradual approach, teams can start making data-based decisions far more efficiently than legacy, manual Lean processes alone ever could. 

    Digitized Lean focuses on removing the presence of Unutilized Talent by arming stakeholders across the workflow with the tools they need to be more efficient and analytical at every step. Central to this is the ability for new Digitized Lean solutions to scale wider and faster than manual processes ever could, which has been the biggest barrier to ROI for I4.0 to date.

    By being able to solve problems today that manufacturers and frontline operations have failed to in the past, Digitized Lean solutions can help finally start accelerating wide-scale I4.0 transformation and the long-promised delivery of the Future Factory. Perhaps best of all, it empowers workers across generations and stations to start thinking Lean and taking more ownership of their day-to-day tasks.

    The AI Bill of Rights, Explained

    The Biden Administration’s AI Bill of Rights is the latest pitch for greater data protections at the federal level as enterprises—and foreign states—adopt data-dependent machine learning systems to evolve their operations. 

    Announced in October 2022, this latest framework builds on the core principles for tech accountability and reform that the U.S. government has attempted to define in fits and starts for more than a decade. While recent legislation—including the CHIPS and Science Act—has funneled federal dollars into building cutting-edge technologies stateside, enforceable guidance on how to responsibly vet and manage new tech has not kept pace.

    This absence of federal guidance has been especially glaring as data-driven solutions have skyrocketed in prominence within the enterprise space, promising more informed decision making and automated systems via artificial intelligence (AI) and machine learning (ML) principles. This new framework marries themes from previous, non-U.S. legislation around data privacy with guidance from innovators across the AI space, all through a lens of social justice and equity that many experts argue hasn’t been prioritized to date. 

    What is the Blueprint for an AI Bill of Rights?

    This as-yet-unenforceable Blueprint is broken out into five principles that any organization developing or using artificial intelligence (AI) should adhere to:

    1. Safe and Effective Systems: Citizens shouldn’t be exposed to untested or poorly qualified AI systems that could have unsafe outcomes—whether to individuals personally, specific communities, or to the operations leveraging individual data.
    2. Algorithmic Discrimination Protections: Simply put, AI models mustn’t be designed in ways that encode bias, nor should systems be deployed that haven’t been vetted for potential discrimination.
    3. Data Privacy: Organizations mustn’t engage in abusive data practices, nor should the use of surveillance technologies go unchecked. 
    4. Notice and Explanation: Individuals should always be informed when (and how) their data is being used and how it will affect outcomes. 
    5. Human Alternatives, Consideration, and Fallback: Individuals should not only have authority to opt-out of data collection, but there should be a human practitioner they can turn to when concerns arise. 

    Why is an AI data framework important?

    While the new guidelines are delivered under the banner of AI, they’re more akin to the broad-based consumer ‘Bill of Rights’ the European Union delivered back in 2018 via the General Data Protection Regulation (GDPR). Anyone working in tech—or, realistically, the enterprise space—in the 2010s will be familiar with GDPR, as it placed an enforceable framework around data privacy that had been startlingly absent before the regulation’s original iteration was adopted in 2016.

    With billions in fines levied against Big Tech since GDPR became law in 2018, it remains among the only globally significant legislative measures in place that give individuals ownership of their personal information. (Editor’s note: EU legislators are hard at work crafting their own AI-specific regulations that build on the principles already in action via GDPR.)

    Still, there are no significant data protection laws at the federal level stateside, even though California’s CCPA and similar legislation in Illinois and other states offer a template federal legislators could follow. Instead, most companies collecting data stateside adhere to ethical standards for data collection only voluntarily: In fact, most global enterprises simply apply the same protections and permissions to U.S. data that they do to data collected in the EU by default.

    This isn’t to say there are no protections for personally identifiable information (PII) in the U.S., as HIPAA and state-level legislation help ensure companies aren’t abusing their access to sensitive healthcare information. But things get tricky as the breadth and variety of data businesses are capable of collecting grow at massive scale, alongside the number of applications for AI and ML driving modern enterprise transformation.

    AI in the Enterprise

    What makes it critical for there to be an AI-specific ‘Bill of Rights’ is that the scale and variety of data collected for many enterprise-grade AI applications far exceeds what one would comfortably call “general data.” 

    One clear example is the rise of computer vision in the enterprise space. Computer vision applications involve taking any form of visual data—image or video, generally—and creating deep learning models that can derive powerful business insights when crafted and managed thoughtfully. These can be used for monitoring an assembly line, for instance, with a computer vision model trained to detect mislabeled or damaged goods, or to track shelf stock levels in retail, with models trained to recognize inventory levels.
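
    For a sense of what “crafting” such a model involves, here is a minimal sketch of the common transfer-learning pattern using PyTorch and torchvision (the pretrained-weights API shown requires a recent torchvision). The two-class “ok”/“damaged” setup, the data/train folder layout, and the single training pass are hypothetical placeholders, not a production pipeline.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    # Standard preprocessing for an ImageNet-pretrained backbone.
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # Hypothetical folder layout: data/train/ok/*.jpg, data/train/damaged/*.jpg
    train_set = datasets.ImageFolder("data/train", transform=preprocess)
    loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

    # Reuse a pretrained backbone; freeze it and train only the final layer.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in model.parameters():
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:           # a single pass, for illustration only
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    ```

    The heavy lifting lives in the pretrained weights; what makes the model useful for a particular assembly line or store shelf is the labeled data it is fine-tuned on, which is exactly where the human-in-the-loop work described below comes in.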

    However, some of the most powerful computer vision use cases have come from tracking actual human beings. At the height of the pandemic, for instance, computer vision models were trained to recognize when workers weren’t wearing appropriate personal protective equipment (PPE), enabling compliance detection from a socially-safe distance. 

    The risks in these scenarios for both humans being tracked and the companies managing computer vision models are manifold.

    For starters, visual data is unstructured, meaning there are no inherent descriptions or classifiable differentiators between one image and another that computer vision models can “learn” from. Instead, there is an explicit human-in-the-loop (HITL) element to designing computer vision models that calls for data managers to apply descriptions to an image (i.e., an unmasked worker versus a masked one) to inform the results of a given model.

    This process of data labeling (or annotation) is only the start of the HITL cycle, as managers need to maintain these models long-term to ensure they continue to perform as expected. Bad data—i.e., mislabeled imagery—could enter the model and inform inaccurate outcomes if data managers aren’t vetting inputs.
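
    To make that vetting step concrete, here is a minimal sketch in plain Python that flags images whose annotations fall outside the approved label set or on which annotators disagree. The label names, annotator IDs, and review policy are all hypothetical; real annotation platforms implement far richer versions of these checks.

    ```python
    from collections import defaultdict

    # Hypothetical label schema for a PPE-compliance model.
    ALLOWED_LABELS = {"masked_worker", "unmasked_worker"}

    # Each record: (image_id, annotator, label) produced during labeling.
    annotations = [
        ("img_001", "annotator_a", "masked_worker"),
        ("img_001", "annotator_b", "unmasked_worker"),  # annotators disagree
        ("img_002", "annotator_a", "masked_worker"),
        ("img_002", "annotator_b", "masked_worker"),
        ("img_003", "annotator_a", "worker"),           # label not in the schema
    ]

    def vet(annotations):
        """Return image_ids that need human review before entering the model."""
        labels_by_image = defaultdict(set)
        flagged = set()
        for image_id, _, label in annotations:
            if label not in ALLOWED_LABELS:
                flagged.add(image_id)       # bad data: unknown label
            labels_by_image[image_id].add(label)
        for image_id, labels in labels_by_image.items():
            if len(labels) > 1:
                flagged.add(image_id)       # conflicting labels: re-review
        return sorted(flagged)

    print(vet(annotations))  # -> ['img_001', 'img_003']
    ```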

    Another phenomenon—known as data bias—could also impact machine learning models of all types (computer vision or otherwise) if those tasked with managing these algorithms long-term aren’t applying their own ethical and responsible standards to data collection and vetting. 
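
    One simple, admittedly crude, guardrail is auditing the composition of the training set before each retraining cycle. In the sketch below, the “region” attribute, the even-split expectation, and the tolerance are hypothetical stand-ins for whatever balance criteria a team actually defines.

    ```python
    from collections import Counter

    def audit_balance(records, attribute, tolerance=0.5):
        """Warn when any group's share deviates sharply from an even split.

        `records` is a list of dicts describing training samples; `attribute`
        is a hypothetical metadata field such as "region" or "age_band".
        """
        counts = Counter(r[attribute] for r in records)
        total = sum(counts.values())
        expected = 1 / len(counts)      # each group's share under an even split
        warnings = []
        for group, n in counts.items():
            share = n / total
            if abs(share - expected) > tolerance * expected:
                warnings.append(f"{attribute}={group}: {share:.0%} of data "
                                f"(expected ~{expected:.0%})")
        return warnings

    # An 80/20 split across a hypothetical "region" attribute trips the check.
    samples = [{"region": "north"}] * 80 + [{"region": "south"}] * 20
    print(audit_balance(samples, "region"))
    ```

    A check like this won’t catch subtle bias, but it forces the kind of deliberate, ongoing data vetting described above.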

    Aimed at limiting data bias (and that’s just the start)

    Facial recognition technologies, for instance, present a clear-cut use case for what data bias in action could portend. Numerous studies have shown that many of the advancements in facial recognition have been influenced by racially unbalanced data sets, which have been shown to disadvantage minority groups when these applications are deployed for surveillance, policing or even vetting homeownership candidates.

    These headline-grabbing use cases are a throughline across the U.S.’s Blueprint for an AI Bill of Rights, as the framework promotes not just more effective AI systems, but non-discriminatory AI first and foremost. 

    However, the language of the blueprint emphasizes that “the technical capabilities and specific definitions of [AI systems] change with the speed of innovation, and the potential harms of their use occur even with less technologically sophisticated tools.” 

    This caveat is important to note, as it more or less acknowledges the inability of the federal government to keep pace with understanding the potential implications of the technologies that both fuel our economy and transform our society. On the one hand, AI developers have so far largely been able to develop and deploy new solutions without the red tape that might otherwise hinder innovation in the states. 

    But as we’re learning today, with more and more AI solutions changing their mission (e.g., IBM Watson) or switching gears entirely (e.g., Zillow), taking a more considered approach to developing AI solutions that’s focused on responsible data management will hopefully improve the efficacy and safety of these solutions long term.