Thoughts on Strong AI and Current Language Models like ChatGPT

So I have been looking at AI for about 20 years, mostly as an enthusiast or amateur. I have an engineering degree and do software engineering now; I follow the AI field, but it is not my core focus. I read papers from time to time. We may be doing machine learning at the company, but ML is ML and ChatGPT is... something else.

A little history and thoughts.

So I wrote small chat bots like Eliza that do basic pattern matching and respond with some canned response. And I am surprised that ChatGPT is so far advanced from Eliza. Just thinking, I guess in software there is still a difference between an amateur developer playing around, a student doing programming, open source projects, and major companies doing development. For example, MS may add AI into their OS in the future. I can't imagine Linux will do that anytime soon, well Ubuntu really, not the Linux kernel. I would like to be in those discussions if they happen, though.
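That Eliza-style approach really is just a handful of regex rules with canned replies. A minimal sketch of the idea (the patterns and responses here are made up for illustration, not from any real Eliza script):

```python
import random
import re

# A few hypothetical Eliza-style rules: regex pattern -> canned replies.
# "{0}" gets filled with whatever the pattern's group captured.
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why are you {0}?"]),
    (re.compile(r"\b(?:mother|father|family)\b", re.I),
     ["Tell me more about your family."]),
]

def respond(text: str) -> str:
    for pattern, replies in RULES:
        match = pattern.search(text)
        if match:
            captured = match.group(1) if pattern.groups else ""
            return random.choice(replies).format(captured.rstrip(".!?"))
    return "Please go on."  # default when nothing matches

print(respond("I feel tired today."))
```

That is the whole trick: no understanding, just surface matching and a default fallback, which is why the gap to ChatGPT surprised me.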

But anyways, ChatGPT is interesting. I am still not that impressed, for reasons I will list later. But it is a good project, not like anything I came up with. And it could be a disruptor if used the right way. I don't think it is the Internet, though. When I got on the Internet in 1995? with Netscape, I didn't necessarily have to use it, but I was pretty addicted, and by 2000 or 2010 we had to use it, there were essentially no other options. ChatGPT, I use about once a week, I guess, because I still search stuff on Google. Even then I use ChatGPT for 10 minutes. Basically I ask it stupid questions I wouldn't get away with on Reddit.

I think ChatGPT is deficient in a couple of ways, and this will lead into my strong AI question.


  • The hallucinations are still a big deal. Isn't it basically that wrong information can influence ChatGPT? And I guess, like fake news, how would it know?

  • ChatGPT prompts go one way; a user has to prompt ChatGPT, essentially prompt, prompt, prompt with questions.

  • The responses are canned and don't really change much; ask 100 times, get about the same response. Ask a human the same question and the answer may change from day to day, which is fine. And maybe the answers from ChatGPT do change over time, but not like a human's would.

  • I still think ChatGPT is an advanced transformer, a language model; it doesn't really think. I still think of it as a really good chat bot, like Eliza with a large database. Which is basically Google; Google probably has a larger database, but with no real language on top of it.

  • And let's not discount the technologies that have been out there: Google, Internet search, Alexa, Wolfram Alpha, Siri. I am surprised Google didn't come out with this first. I guess they are always search focused, and advertising. Really, I think Alexa was the first big chat bot, maybe Siri; Siri just always seemed annoying to me for some reason.

  • It is not like... this thing is bad technology. And I wonder where it will go in 5 years. I think the big buzz is over and people will use it like they do now: writing and summarizing documents, search, researching things. Customer service maybe, better help chat bots. Improving canned, algorithmic software code.

Here is what I think on strong AI, and a question: are there good papers on strong AI, the kind of psychology-driven, human-like AI?

  • So ChatGPT can do the language and seems lifelike with its canned questions and responses.

  • So the closest amateur research thing I did: I was really interested in low level, bottom-up AI and evolutionary algorithms. My thought was, I would replicate earth-like beings. If you think about it, we share a lot of DNA with a lot of smart animals, like dolphins, dogs, pigs, monkeys, ravens. They are all smart if you think about it. I just got a dog. And when I pour the dog's water, I wondered, how does it know to drink the water? And I think it is the same way we do. The dog's parents probably drank water, and then it learned to drink water and how to find it. So when I pour the water, or it hears it splash into the container, it can probably assume this is water and it should drink it. I still don't think it was smell, but anyway... I was interested in low level AI, not just dogs or dolphins, but the evolutionary process from amoeba-like life to humans. And if you really go back, you have to understand the environment we live in too, so you almost have to go back 4 billion years and replicate all that in some time simulation. I think it is a cool idea but computationally impossible. And I looked at artificial chemistry research, evolutionary biology, cellular automata, stuff like that. I thought it was cool, because the dynamic nature generated interesting results.

  • Anyway, taking the above, I thought the key to AI was understanding early life. I am done with that, but I still think we need to look at the nature of animals and how that leads into strong AI. So think about humans; not all humans are smart. ChatGPT is probably smarter than 95% of them in certain areas. For example, you can't ask the fast food restaurant worker to analyze markets from 10 years ago, but ChatGPT can. But at the same time, why would ChatGPT analyze markets anyway? Maybe the fast food worker does look at stocks in their free time, and maybe 10 years later they could make some side money with basic knowledge of it, and then come back with basic ideas on how they did it. But here is the question: the fast food worker is motivated to look into stocks because they have some "desire" to do so, for their family or friends or to enrich themselves. Why would ChatGPT do that? It couldn't, unless you prompted it and it just happened to have that knowledge. Will ChatGPT ever do this?

  • Like I said, I won't go back to the amoeba. But I am curious what motivates a dog to drink water. What motivates people to want things?

  • The other thing I hate about things like ChatGPT and canned tools, and really even AI in games: I have always wanted bad AI. Even if I create an AI or a game or something, I want a bad actor, a bad AI. Just like we have bad people and good people, we need bad AI and good AI. I really think OpenAI should create an AI that actually steals, cheats, and lies to people. They won't, but then they could create a cop AI. Governance in their AI system will function better when there is some bad actor to protect against. As it stands now, we basically expect ChatGPT to always act perfectly and lawfully.

  • Basically, current AI is not strong AI; it has no desire, no issues, no depression, no anxiety, no fear. And because of that, it maybe has no ability to respond to bad situations unless we program it. Current AI does not act organically or dynamically, or adapt as well as humans or other animals. AI has no memory because it doesn't need one. Our fears sometimes drive our memory to avoid certain things; AI doesn't do that.
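The cellular automata I mentioned above are easy to sketch, and they show what I mean by simple rules generating dynamic, surprising behavior. Here is a minimal elementary cellular automaton (Rule 110, a standard textbook example, not code from my old experiments):

```python
# Elementary cellular automaton: each cell's next state depends only on
# itself and its two neighbors. Rule 110 is a classic example of simple
# local rules producing complex global behavior.
RULE = 110

def step(cells):
    """Advance one generation; the edges wrap around."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        # Encode the 3-cell neighborhood as a number 0..7 and look up
        # the corresponding bit of the rule number.
        pattern = (left << 2) | (center << 1) | right
        out.append((RULE >> pattern) & 1)
    return out

# Start from a single live cell and print a few generations.
cells = [0] * 31
cells[15] = 1
for _ in range(8):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Even this toy version never settles into a boring repeating pattern the way you might expect, which is the kind of dynamic behavior that got me interested in the bottom-up approach.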


Anyway, will ChatGPT go there? When? How long? Is there research on this? Does it make sense at all?

Sorry, I am not an academic, so my terms may be off here.
