<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>
	Comments on: Curbing AI&#8217;s Potential Dark Side: A Case Study on Regulating AI Misuse	</title>
	<atom:link href="https://futuristspeaker.com/artificial-intelligence/curbing-ai-potential-dark-side-a-case-study-on-regulating-ai-misuse/feed/" rel="self" type="application/rss+xml" />
	<link>https://futuristspeaker.com/artificial-intelligence/curbing-ai-potential-dark-side-a-case-study-on-regulating-ai-misuse/</link>
	<description>Thomas Frey Google&#039;s Top Rated Futurist Speaker</description>
	<lastBuildDate>Wed, 09 Aug 2023 10:17:54 +0000</lastBuildDate>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.6.5</generator>
	<item>
		<title>
		By: Tyrion Lannister		</title>
		<link>https://futuristspeaker.com/artificial-intelligence/curbing-ai-potential-dark-side-a-case-study-on-regulating-ai-misuse/#comment-100552</link>

		<dc:creator><![CDATA[Tyrion Lannister]]></dc:creator>
		<pubDate>Wed, 09 Aug 2023 10:17:54 +0000</pubDate>
		<guid isPermaLink="false">https://futuristspeaker.com/?p=39277#comment-100552</guid>

					<description><![CDATA[Great Information. Thanks for sharing]]></description>
			<content:encoded><![CDATA[<p>Great Information. Thanks for sharing</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: About Creativity		</title>
		<link>https://futuristspeaker.com/artificial-intelligence/curbing-ai-potential-dark-side-a-case-study-on-regulating-ai-misuse/#comment-99976</link>

		<dc:creator><![CDATA[About Creativity]]></dc:creator>
		<pubDate>Thu, 13 Jul 2023 23:34:27 +0000</pubDate>
		<guid isPermaLink="false">https://futuristspeaker.com/?p=39277#comment-99976</guid>

					<description><![CDATA[That&#039;s Good, Keep Going! ✔ &quot;Category 2: Communication Disruption
1. Deploy a bot to flood Mike’s email with spam.&quot; Note: I have had this experience with YouTube, in that the liking of my comments was not received well and, in turn, the number of messages in my spam folder went up. It may not have been a bot that did the spamming, but all the same. For the most part, I still enjoy commenting on YT. I do not go on any of the other social media platforms, for this very reason. I will continue to find ways to be (individual) me on the planet. It is most important to think for myself about what I experience and see on the planet. Think and Grow Rich, by Napoleon Hill, is a good book; Chapter 8 is an option on thinking. I have moved through California, Arizona, Oregon, Maine, Portugal, Greece, and now Thailand. Loving this 80 F weather all year round. See how it goes. I think the &quot;Line&quot; is a new idea full of options. Good Day.]]></description>
			<content:encoded><![CDATA[<p>That&#8217;s Good, Keep Going! ✔ &#8220;Category 2: Communication Disruption<br />
1. Deploy a bot to flood Mike’s email with spam.&#8221; Note: I have had this experience with YouTube, in that the liking of my comments was not received well and, in turn, the number of messages in my spam folder went up. It may not have been a bot that did the spamming, but all the same. For the most part, I still enjoy commenting on YT. I do not go on any of the other social media platforms, for this very reason. I will continue to find ways to be (individual) me on the planet. It is most important to think for myself about what I experience and see on the planet. Think and Grow Rich, by Napoleon Hill, is a good book; Chapter 8 is an option on thinking. I have moved through California, Arizona, Oregon, Maine, Portugal, Greece, and now Thailand. Loving this 80 F weather all year round. See how it goes. I think the &#8220;Line&#8221; is a new idea full of options. Good Day.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Jay Jones		</title>
		<link>https://futuristspeaker.com/artificial-intelligence/curbing-ai-potential-dark-side-a-case-study-on-regulating-ai-misuse/#comment-99597</link>

		<dc:creator><![CDATA[Jay Jones]]></dc:creator>
		<pubDate>Sun, 25 Jun 2023 13:38:09 +0000</pubDate>
		<guid isPermaLink="false">https://futuristspeaker.com/?p=39277#comment-99597</guid>

					<description><![CDATA[Good article, Tom. Glad to see that the use of AI can be a positive thing as well as a negative one, depending on our moral choices. The same could be said for the new digital government currency they are creating...potential for good or bad...how do you make sure the ethical rules are made and enforced?]]></description>
			<content:encoded><![CDATA[<p>Good article, Tom. Glad to see that the use of AI can be a positive thing as well as a negative one, depending on our moral choices. The same could be said for the new digital government currency they are creating&#8230;potential for good or bad&#8230;how do you make sure the ethical rules are made and enforced?</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Phil Lawson		</title>
		<link>https://futuristspeaker.com/artificial-intelligence/curbing-ai-potential-dark-side-a-case-study-on-regulating-ai-misuse/#comment-99381</link>

		<dc:creator><![CDATA[Phil Lawson]]></dc:creator>
		<pubDate>Thu, 15 Jun 2023 21:52:03 +0000</pubDate>
		<guid isPermaLink="false">https://futuristspeaker.com/?p=39277#comment-99381</guid>

					<description><![CDATA[Hey Tom, excellent piece outlining what could, will, and is happening with AI. I am a lifelong fan of the potential of tech, specifically AI, which has been my passion for years. But when we realize that your post outlines exactly what nations and disruptive segments of society are doing, have been doing, and will exponentially accelerate against their real or imagined enemies, we start to recognize the true scope of the AI challenge. Regretfully, the challenge is even greater.

LLMs and their chat applications can seem friendly, nice, and intelligent, and would appear to be wonderful at assisting humans with the challenges we face in our well-being and mental health. That help is much needed, as 90% of Americans feel America is in a mental health crisis, and it is. Meanwhile, research reports a critical shortage of mental health workers, with estimates of the shortfall ranging from a quarter of a million to nearly four and a half million professionals. And it takes many years to train more.

Tech that could help would be wonderful. But AIs, LLMs, etc. are not intelligent as we humans are; they cannot reason as humans reason, and hence “it” can be, and is, very dangerous, uncontrolled, and uncontrollable. Laws and regulations are a vital part of what is needed, but laws and regulations have not stopped social media from harming people, impacting elections, and facilitating revolutions and ethnic cleansing.

Tech leaders make billions by minimizing and ignoring these real dangers, and by personifying AI, saying it is “hallucinating” when it is simply failing catastrophically and they have no clue why, or how to stop it. AI can, and hopefully will, be a wonderful advance for humanity, but not as it is being done now. It can be made safer. Society would greatly benefit from good, safe, useful, truthful, accurate, ethical, and non-addictive technology. There is no need to release AI in an unsafe iteration onto civilization, except for a small handful of people who believe the ‘winner’ of this tech battle takes all, will make billions, and will be in control.]]></description>
			<content:encoded><![CDATA[<p>Hey Tom, excellent piece outlining what could, will, and is happening with AI. I am a lifelong fan of the potential of tech, specifically AI, which has been my passion for years. But when we realize that your post outlines exactly what nations and disruptive segments of society are doing, have been doing, and will exponentially accelerate against their real or imagined enemies, we start to recognize the true scope of the AI challenge. Regretfully, the challenge is even greater.</p>
<p>LLMs and their chat applications can seem friendly, nice, and intelligent, and would appear to be wonderful at assisting humans with the challenges we face in our well-being and mental health. That help is much needed, as 90% of Americans feel America is in a mental health crisis, and it is. Meanwhile, research reports a critical shortage of mental health workers, with estimates of the shortfall ranging from a quarter of a million to nearly four and a half million professionals. And it takes many years to train more.</p>
<p>Tech that could help would be wonderful. But AIs, LLMs, etc. are not intelligent as we humans are; they cannot reason as humans reason, and hence “it” can be, and is, very dangerous, uncontrolled, and uncontrollable. Laws and regulations are a vital part of what is needed, but laws and regulations have not stopped social media from harming people, impacting elections, and facilitating revolutions and ethnic cleansing.</p>
<p>Tech leaders make billions by minimizing and ignoring these real dangers, and by personifying AI, saying it is “hallucinating” when it is simply failing catastrophically and they have no clue why, or how to stop it. AI can, and hopefully will, be a wonderful advance for humanity, but not as it is being done now. It can be made safer. Society would greatly benefit from good, safe, useful, truthful, accurate, ethical, and non-addictive technology. There is no need to release AI in an unsafe iteration onto civilization, except for a small handful of people who believe the ‘winner’ of this tech battle takes all, will make billions, and will be in control.</p>
]]></content:encoded>
		
			</item>
	</channel>
</rss>