As artificial intelligence (AI) continues to revolutionize industries, its ability to address global challenges like inequality, poverty, and climate change positions it as a transformative force for good. However, public concern and mistrust pose significant barriers to its widespread adoption. A 2023 YouGov survey found that 49% of US respondents were concerned about AI, with 22% saying they were scared of its potential implications.
Against this backdrop of public unease, Raj Sharma, Global Managing Partner for Growth and Innovation at EY, reflects on the role of transparency and responsible frameworks in building trust in AI systems. In a piece shared during the World Economic Forum's 2025 Annual Meeting, Sharma outlines six key strategies organizations can use to address skepticism and foster confidence in AI:
1. Develop Responsible Frameworks: Organizations should adopt frameworks like the NIST AI Risk Management Framework or create internal guidelines to ensure AI is deployed fairly, safely, and inclusively. Trust grows when stakeholders believe in an organization’s commitment to ethical AI practices.
2. Engage Stakeholders Early: Transparency begins with involving clients, employees, and other stakeholders in the design and deployment process. Proactive communication about AI’s purpose and benefits helps alleviate fears and garner buy-in.
3. Prioritize Diversity in Development: A diverse team minimizes bias, a key risk in AI development. Diverse perspectives also ensure AI tools meet a broader range of needs and expectations.
4. Foster Collaboration Across Teams: Deploying AI requires a collective effort involving data scientists, HR, legal teams, and end-users. Collaboration enhances transparency and encourages responsible usage of AI systems.
5. Apply Robust Governance: Boards must define AI risk appetites, oversee strategy, and enforce proper guardrails. Governance ensures accountability and alignment with organizational goals.
6. Embed Algorithmic Guardrails: Guardrails safeguard against risks like hallucinations and toxic outputs, promoting ethical AI operations.
Sharma emphasizes that transparency means more than revealing AI’s technical functions; it involves clear communication about AI’s objectives, impacts, and decision-making processes. He warns that secrecy and exclusivity in AI development risk eroding trust, leading to resistance and missed opportunities to transform businesses and lives.
The call to action is clear: organizations must prioritize transparency and inclusivity in AI implementation. Only then can we overcome skepticism and harness AI’s full potential to drive economic growth and improve global well-being.