Reality Check for Generative AI: Hype vs. Obstacles in 2024

As the buzz around generative AI continues to grow, bringing promises of revolutionary advances alongside warnings of potential pitfalls, analyst firm CCS Insight suggests that 2024 will deliver a much-needed reality check to this rapidly evolving sector. According to the firm's predictions, fading hype, rising costs, and increasing calls for regulation will likely slow the pace of the generative AI field.

Generative AI: Hype and Challenges

Ben Wood, Chief Analyst at CCS Insight, believes that the generative AI sector is currently experiencing a wave of hype that needs to be tempered with a dose of reality. He remarks, “The bottom line is, right now, everyone’s talking generative AI…But the hype around generative AI in 2023 has just been so immense that we think it’s overhyped, and there are lots of obstacles that need to be overcome to bring it to market.”

One of the primary challenges cited is the complexity and high cost of deploying and maintaining generative AI models such as OpenAI’s ChatGPT and Google Bard. The financial burden of these technologies could deter many organizations and developers from harnessing their potential.

AI Regulation on the Horizon

The predicted reality check for generative AI extends beyond financial concerns. The rapid pace of AI advancements is expected to pose a significant challenge to AI regulation, particularly in the European Union. The EU is working on introducing specific regulations for AI, but CCS Insight anticipates that revisions will be necessary, with legislation not likely to be finalized until late 2024.

The EU’s proposed AI Act has generated considerable controversy within the AI community, with major AI companies advocating for differing approaches to regulation. The debate has also sharpened scrutiny of the ethical use of AI: according to a Salesforce report, just 13% of consumers fully trust companies to employ AI ethically.

Prioritizing Ethical AI

The same report reveals that 80% of consumers believe it is essential for a human to validate the output generated by an AI tool. Amid growing trust concerns, companies are increasingly prioritizing data security, transparency, and ethical AI use. Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce, emphasizes the importance of safeguarding customer data and trust, stating, “Companies may need data as much as ever, but the best thing they can do to protect customers is to build methodologies that prioritize keeping that data — and their customers’ trust — safe.”

In summary, while generative AI holds great promise, the year 2024 is expected to bring a reality check, with challenges related to cost, regulation, and ethical concerns coming to the forefront. Balancing innovation with responsibility and ethical considerations will be key as the generative AI sector continues to evolve.