Beyond the Model — Why Responsible AI Must Address Workforce Impact
For the fifth consecutive year, MIT Sloan Management Review and Boston Consulting Group (BCG) have assembled an international panel of AI experts, including academics and practitioners, to examine how responsible artificial intelligence (RAI) is being implemented across organizations worldwide.
In prior years, the panel examined organizational RAI maturity; third-party, generative, and agentic AI risks; and core AI governance pillars, including accountability, explainability, and oversight. Since the project began, AI use has spread rapidly among organizations of every size, sector, and geography. At the same time, early fears about AI's impact on the workforce have begun to materialize, with several companies announcing substantial layoffs while citing AI-enabled efficiency gains.
Given the growing concerns over how much human workers will be affected by AI, the researchers asked the panel to react to the following provocation: Responsible AI practice should address workforce impact, not just AI system risk. Nearly 80 percent of panelists agreed or strongly agreed with the statement. The panel previously highlighted that sound AI governance asks not only how a technology is designed or deployed but whether it should be used at all. This year's panel extended that logic, stressing that responsible AI must look beyond safe systems to the real-world consequences for workers and economic stability.
Please select this link to read the complete article from MIT Sloan Management Review.