How to use AI for good

Social media was humanity's first trial run with AI, and according to technology ethicist Tristan Harris, whom The Atlantic once called "the closest thing Silicon Valley has to a conscience," that test failed terribly. A recent study found that about half of Gen Z respondents wish social media had never been invented. Yet 60% of them still spend at least four hours a day on these platforms.

Violence, social anxiety, addiction, polarization, and misinformation have made social media a disturbing cocktail. With generative AI (genAI), we have a second chance to ensure that we use technology responsibly.

But that is proving difficult. Major AI companies are now taking collaborative approaches to governance problems. OpenAI recently announced that its AI models will adopt Anthropic's Model Context Protocol for connecting to data sources, a standard that Google has also embraced and that is becoming an industry norm.

Any new technology comes with unexpected benefits and consequences. As Harris puts it, "Whatever power we have as a species, AI amplifies it to an exponential degree."

GenAI helps us do more than ever before, but it also carries dangers. A large language model (LLM) can be manipulated by bad actors to create disinformation, or jailbroken to write malicious code. How do we avoid these harms while using this powerful technology? Consider three approaches, each with its own merits and shortcomings.

3 ways to benefit from AI while avoiding its harms

Option #1: Government regulation

The automobile brought both convenience and tragedy. We responded with speed limits, seat belts, and rules of the road, a process that has stretched across more than a century.

Lawmakers around the world are attempting something similar with AI. The European Union leads the way with its AI Act, which entered into force in August 2024. Implementation is staged: provisions that took effect in February 2025 ban practices deemed an "unacceptable risk," such as social scoring and the untargeted scraping of facial recognition data.

These rules face headwinds, however. European tech leaders worry that punitive EU measures could draw retaliation from the Trump administration. Meanwhile, U.S. regulation is evolving as a patchwork of state and federal initiatives, with states like Colorado enacting their own comprehensive AI laws.

The EU AI Act's implementation timeline illustrates this complexity: the first bans took effect in February 2025, codes of practice follow nine months after entry into force, rules for general-purpose AI arrive at 12 months, and obligations for high-risk systems phase in over a still longer horizon.

There's also a real concern that excessive regulation will simply push development elsewhere. Building a functional LLM costs hundreds of millions of dollars, a sum that many countries can muster.

While regulation has its place, the process is currently too flawed to produce good rules. AI is evolving too quickly, and too much investment is pouring into the industry. The rules that emerge risk either stifling innovation or having no meaningful effect.

So if government regulation is not a panacea for AI's dangers, what will help?

Option #2: Societal solutions

Educators are grappling with genAI and academic honesty. Some want to ban AI outright, while others see an opportunity to reach students who struggle with traditional pedagogy.

Imagine having a tutor who can answer any question but will also do your assignments for you. As Satya Nadella recently put it on the Dwarkesh Podcast, his new workflow is to "think with the AI and work with my colleagues." That collaborative approach could be a model for educational settings, with AI serving as a tutor that mediates learning rather than replacing it.

In homes, schools, online forums, and government, society must come to terms with this technology and decide what is acceptable. Everyone deserves a voice in these conversations. Unfortunately, internet discussions too often devolve into trading sound bites without context or nuance.

We must educate ourselves so that we can hold meaningful conversations. And we need effective channels, perhaps publicly accessible ones, for guiding people toward safe and effective uses of AI.

Option #3: Third-party assessors

Before the 2008 financial crisis, credit rating agencies assigned AAA ratings to risky securities, contributing to economic disaster. The problem? Conflicts of interest within the industry.

With AI assessors, of course, we run the risk of a similar revolving door that does more harm than good. It doesn't have to be that way.

Meaningful, thoughtful research is going into AI certification and third-party assessment. In the paper "AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries," Peter Cihon et al. offer several insights.

First, because AI technology advances so quickly, AI certification should emphasize evergreen principles, such as ethics for AI developers.

Second, AI certification today lacks nuance for specific contexts, geographies, or industries. Not only is certification far from uniform, but many programs treat AI as a "monolithic technology" rather than distinguishing among its varieties, such as facial recognition, LLMs, and anomaly detection.

Finally, for certification to yield good outcomes, customers must demand high-quality certification. That means educating themselves about the technology and its attendant ethical and safety issues.

The way forward 

The path forward requires multifaceted conversations about AI's dangers and society's goals for the technology. If government becomes the default regulator, we risk either a hobbled marketplace or a meaningless rubber stamp.

Independent third-party assessors, combined with informed public discussion, offer the best way forward. But we must educate ourselves about the dangers and realities of this powerful technology, or we will repeat the mistakes of social media on a grander scale.

Peter Wang is the chief AI and innovation officer at Anaconda.
