When Elon Musk’s artificial-intelligence company, xAI, doubled its valuation to $45 billion in just a few months, the leap underscored the immense value and attention that AI ventures command. Much of that energy focuses on sweeping questions: Could AI surpass human control? Will it displace millions of jobs? While these are critical concerns, they overshadow a more-immediate reality: AI already shapes key decisions in our lives, such as determining loan approvals and allocating public-health resources.
AI systems often lack input from experts who deeply understand the specific domains they are designed to influence. This oversight leads to algorithms that amplify existing biases, neglect important local contexts, and reinforce inequities. The urgency cannot be overstated: AI is already influencing critical decisions, including health-care access and climate interventions, and poorly designed systems can harm vulnerable populations and deepen those inequities.
Predictive-policing algorithms, which forecast where crimes are likely to occur or which individuals may be involved in criminal activity, can disproportionately target minority communities. Urban-planning models may fail to account for unique neighborhood dynamics, such as how residents use green spaces, interact with transportation systems, and adapt their behavior based on the accessibility of sidewalks, parks, and other infrastructure, leading to misaligned infrastructure-investment recommendations. In health care, algorithms designed to allocate resources based on historical spending patterns rather than medical need have prioritized white patients with higher costs for extra care management over sicker Black patients, worsening disparities.
Simply urging computer scientists to be “ethical” without incorporating knowledge from other fields is insufficient, as ethical considerations require deep understanding of social, cultural, and human contexts. To create AI systems that truly serve humanity, we need experts from all fields, not just computer science, actively involved in their development. This necessitates a major overhaul of AI education.
AI education must become a foundational part of how we educate future leaders across all disciplines. Colleges are uniquely positioned to drive this transformation. By introducing broad, accessible courses on AI and data for all students, regardless of their major or previous experience, we can foster a universal ability to contribute to responsible technology development and its productive use. Imagine public-health students designing machine-learning models to predict and respond to disease outbreaks; those models would incorporate their field expertise and knowledge of population-level health-policy requirements or mechanisms of social determinants of health — factors that computer scientists might overlook. Similarly, future teachers could gain the skills to evaluate and refine AI-driven educational tools, ensuring they align with pedagogical goals and meet the nuanced needs of diverse learners in real-world classrooms.
Equipping professionals in fields like health, education, and environmental science with AI skills is essential for ensuring that these systems serve the public good. Public-sector departments, such as city planning or public health, often lack the resources to attract tech professionals, creating gaps in how AI tools are used. Public-health workers might use AI to predict disease outbreaks based on data from social media and emergency-room visits but fail to recognize that an algorithm overrepresents urban data, leading to skewed resource allocation and neglect of rural areas. Similarly, social workers using AI to predict child-abuse risks might inadvertently target marginalized and underserved families, reinforcing systemic inequities.
The lack of interdisciplinary collaboration exacerbates these issues. AI education in STEM fields often overlooks ethical, societal, and field-specific considerations. This disconnect can result in technologies that misinterpret data or perpetuate disparities: Loan-approval algorithms trained on biased data have denied credit to Black and brown people. Facial-recognition systems have been shown to misidentify individuals with darker skin tones, leading to unfair outcomes in law enforcement.
To ensure AI systems are both effective and fair, collaboration across disciplines is essential. Experts in fields like public health, who deeply understand systemic inequities and social determinants, must play a central role in guiding the development of ethical and equitable algorithms. Without their expertise, computer scientists often lack the nuanced understanding needed to take on these complex societal challenges. At the same time, technologists must work with domain experts to understand the societal impact of their work; such collaborations will foster a more-comprehensive approach to responsible AI development.
As AI evolves faster than our capacity to regulate or educate, we risk entrenching a system where its benefits are accessible only to a narrow elite. We need a cultural shift that sees AI as a shared responsibility, not just a technical domain. AI is already shaping us. The question is: Who will shape AI?