From AI to responsible AI – the future?
As artificial intelligence (AI) takes a wrong turn with the misuse of deepfakes and generative AI, among other technologies, experts believe that responsible AI can help counter these risks. Responsible AI is believed not only to foster trust, transparency and societal well-being but also to streamline operations, enhance decision-making and ensure compliance. The collaboration between AI and regulatory structures is important: the former drives efficiency, while the latter safeguards against unintended consequences, biases and possible harm to humanity.
“AI can be a consequential technological advancement with the potential to carry high stakes. Any powerful tech can be misused, as evidenced by the recent spate of deepfake crimes. This is the time for AI tool companies to factor in ethics, diversity and inclusivity while designing algorithms and to make sure that the AI platform does not reflect any harmful stereotypes,” Atul Rai, co-founder and CEO of Staqu Technologies, an AI-based startup, told FE-TransformX.
The AI market, currently worth about $100 billion, is expected to grow twentyfold by 2030, as per insights from Next Move Strategy Consulting, a market research firm. The market spans sectors such as supply chains, marketing, product development, and research and analysis, among others, and experts believe more sectors will adopt artificial intelligence within their business structures. Chatbots, image-generating AI and mobile applications are reportedly expected to be the major trends shaping AI in the coming years, as per insights from Statista.
Reportedly, the National Strategy for AI highlights the need for effective policies and standards to mitigate AI-based risks. The ever-evolving nature of AI requires a robust policy framework to prevent misuse and bias. An AI-first culture that is inherently people-first is expected to empower human ingenuity and strengthen the relationship between people and technology.
With the rise of automated technology, a balance needs to be maintained between regulation and automation through responsible AI. Respondents at AI high performers are nearly eight times more likely than their peers to say their organisations spend at least 20% of their digital-technology budgets on AI-related technologies, as per McKinsey, a consulting firm.
How much responsibility can AI take?
Industry experts believe that ethical lapses in AI can lead to discrimination and biased decision-making, affecting sectors such as recruitment. As AI becomes more autonomous, determining responsibility for errors can be a challenge. Responsible AI can help protect privacy and balance ethical considerations with innovation.
Interpretability and accountability are other important factors, as is recognising human psychology and its limitations. Moreover, responsible AI offers a way to develop and deploy AI systems within business structures with a focus on ethics and transparency, among other considerations.
AI is expected to redefine industries and economies. In this light, responsible AI emerges as a crucial aspect, emphasising ethical and accountable use. The adoption of AI in businesses is expected to be an ongoing process, and most sectors expect AI adoption within their business to grow in the coming years.
AI is expected to be a critical factor for 49% of IT-related enterprises by 2025, as per insights from Statista. However, there are also ethical concerns, biases in algorithms and potential job displacement (especially at the lower end), which can be addressed by adopting and improving responsible AI frameworks. There is a need to strike a balance between innovation and ethical considerations, as the implications affect not just technology but the fabric of society.
Can AI be responsible?
When misused, AI can cause problems such as unfairness, privacy invasion and the worsening of existing inequalities, among others. Using AI responsibly can protect businesses from reputational damage and legal ramifications. To do better with AI, it is important to keep up with evolving ethical standards, regularly update protocols and ensure everyone in the organisation understands what it means to act ethically. In a broad sense, AI represents technological innovation and automation, but responsible AI adds an essential layer of consciousness to the development and deployment of these technologies: it is about acknowledging that as the power of AI is unlocked, it must be done responsibly.
Experts believe that the Digital Personal Data Protection Act, 2023, can safeguard individuals’ personal data from being processed by AI systems without informed consent. The government is also expected to work on the Digital India Act, which will have provisions to regulate AI intermediaries and high-risk AI systems.
Reportedly, the government expects the Digital India Act to be a principle-based framework that can be tailored through rules to address AI and other emerging technologies as they develop. While such a framework, with executive rules, can enable the government to adapt quickly to issues as they arise, it could also make the legal environment more predictable and accountable. It remains to be seen how the Digital India Act will take form and how the government plans to address these issues through responsible AI.