
AI, Empathy, and Emotion

Ralyx

To anyone tuning in out of the blue: this discussion originally spawned as a bit of a derail from a BNHA story over in the NSFW forums. It's pretty decent so far, so go check it out if it strikes your fancy.

The core questions (among others) seem to be: would an A.I. need to experience an emotion to accurately understand it? Would subjecting an A.I. to priority parameters based on human emotion be a good idea?

I would contend that, no, it would not. Considering that an A.I. would need to be able to accurately model an emotion before it could apply it to itself, there is no logical reason why it would gain any additional understanding from that self-application.
 
And on that note: Knowing the definition of an emotion is not empathy. Empathy is feeling that emotion and knowing how it feels.
Empathy serves as a predictive model for other people's emotional reactions. If an AI can accurately model an emotional reaction and react accordingly, then there is no reason for the AI itself to 'feel' that emotion. Subjecting the AI's own decisions to be contingent on said emotions should not logically increase its modelling capacity.
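To make that concrete, here's a minimal sketch (all names, features, and thresholds are invented for illustration, not a claim about any real system): an agent that consults a predictive model of human emotional reactions when ranking its options, while its own objective never takes a 'felt' emotion as input.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    harm_to_human: float     # assumed feature, 0.0 .. 1.0
    benefit_to_human: float  # assumed feature, 0.0 .. 1.0

def predict_emotion(situation: Situation) -> str:
    """Toy stand-in for a learned model of human emotional reactions."""
    if situation.harm_to_human > 0.5:
        return "distress"
    if situation.benefit_to_human > 0.5:
        return "gratitude"
    return "neutral"

def choose_action(candidates: list[Situation]) -> Situation:
    # The prediction is used only to rank outcomes; nothing here requires
    # the agent to experience the emotion it predicts.
    def score(s: Situation) -> float:
        penalty = 1.0 if predict_emotion(s) == "distress" else 0.0
        return s.benefit_to_human - penalty
    return max(candidates, key=score)

options = [Situation(harm_to_human=0.8, benefit_to_human=0.9),
           Situation(harm_to_human=0.1, benefit_to_human=0.6)]
print(choose_action(options))  # picks the low-harm option
```

The prediction and the agent's own decision criteria stay separate, which is the whole distinction being argued for.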

That's just an A.I. incapable of learning and adapting. If every single part of the A.I. is perfectly programmed to only adhere to that programming, then we're not actually getting a second opinion. We're getting the opinion we programmed it to give. In which case, that's not an actual artificial intelligence. It's just a virtual intelligence with no free will of its own, only the ability to respond how it was programmed to respond.
I'm not sure where you're getting this all-or-nothing idea when it comes to constraints. As a simple analogy, we could have an AI that learns to play chess, yet still can't move a bishop horizontally, because that was a constraint included at the outset. Likewise, a subordinate can be entirely loyal to their boss and yet still offer a differing opinion.
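A loose sketch of that analogy (the move format and the constraint filter below are made up for the example): a learned policy can propose whatever moves it likes, while a fixed filter added at the outset simply discards anything that breaks the built-in rule.

```python
# Illustrative only: a learned policy proposes moves, but a fixed
# constraint filter vetoes any bishop move along a rank (horizontal),
# mirroring a rule baked in at the outset that the learner never overrides.

from typing import List, Tuple

Move = Tuple[str, Tuple[int, int], Tuple[int, int]]  # (piece, from_square, to_square)

def violates_constraint(move: Move) -> bool:
    piece, (from_file, from_rank), (to_file, to_rank) = move
    # Invented constraint from the analogy: bishops may never move horizontally.
    return piece == "bishop" and from_rank == to_rank

def legal_moves(proposed: List[Move]) -> List[Move]:
    """Whatever the learned policy suggests, constrained moves are discarded."""
    return [m for m in proposed if not violates_constraint(m)]

proposals = [
    ("bishop", (2, 0), (4, 2)),  # diagonal: allowed through
    ("bishop", (2, 0), (5, 0)),  # horizontal: filtered out
]
print(legal_moves(proposals))
```

The constraint and the learned behaviour live in separate parts of the system, so having one doesn't preclude the other.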

A deterministic system is always predictable; your computational power just dictates how long you wait for results.
Ah, true, that's my bad. I said computational power, but I suppose what I meant was actually memory requirement. If you can't store (and retrieve and edit) information about every element in a system, then you can't perfectly model it. Consequently, a system can never perfectly model a system larger than itself.
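A back-of-the-envelope version of that counting argument (the sizes here are arbitrary, just to show the shape of it): distinguishing every state of n independent binary elements takes at least n bits, so a modeler with less memory than that cannot represent the larger system exactly.

```python
def min_bits_to_model(n_binary_elements: int) -> int:
    # 2**n distinct states -> log2(2**n) = n bits, at minimum
    return n_binary_elements

target_system_elements = 10**9        # hypothetical system: a billion binary elements
modeler_memory_bits = 64 * 10**6 * 8  # hypothetical modeler with 64 MB of memory

print(min_bits_to_model(target_system_elements) <= modeler_memory_bits)
# False: the modeler's memory can't even enumerate the target's state,
# let alone store the relations between elements needed for perfect prediction.
```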

While we, as humans, might not be able to observe the exact relationships and reactions, in a deterministic system observed from an optimal vantage point every state is bound to a given time, with deviation from the expected state being zero.
I'm not sure why you are appealing to an 'optimal vantage point'. That, if anything, seems entirely pointless from our perspective, since we are inherently bound to our own limited vantage point.
 
I'm not sure why you are appealing to an 'optimal vantage point'. That, if anything, seems entirely pointless from our perspective, since we are inherently bound to our own limited vantage point.
Because I realize that the phenomenon I'm pursuing could possibly occur only in very limited circumstances, and because we were discussing definitions and inherent limitations, not necessarily feasibility.

That means I looked at what is available and what is likely to be available in the near future, and decided it would be insufficient for my purposes. After all, I wanted to try discussing the best possible result, and we're still far from that. So I expanded the scope a bit, to include everything under local physical laws, with perfect awareness.
 
Human empathy is derived from a gene, or complex of genes, that causes our brains to grow in such a way as to make us recognize and partly experience the suffering of others. It can do this because that same gene is present in other people, making them cry to signal the need for such attention. To anthropomorphise the gene: the gene "wants" to make more copies of itself and "feels bad" when copies of it in other people (people it "assumes" carry the gene) die, because that "hurts" the gene's chances of reproduction. So it grows your brain in such a way as to act as mind control, making you do the same. That this "just so happens" to be a fairly valid survival strategy is a nice side effect for us humans back up at the organism level. Some people, through genetics, brain defect, or chemical imbalance, are immune to this effect or don't possess the gene at all, and so lack empathy. Others have a narrower scope for it than "all humanity" and can only apply it to people who superficially share their own visual appearance. The neoteny-adoring gene complex that makes you care for babies and adopt puppies is not the same one as the one for empathy.

For an AI to have the same empathy as humans, it will have to have some module inside it which is an analogue of this gene complex, and which "identifies" the gene complex in humans as "more copies of itself". This module will also have to benefit the AI's value function or terminal goals, or it will be removed at the AI's first opportunity. The AI will also need the ability to model and predict the emotional reactions of humans, so that it can know what a human is feeling, take that human's situation into account, and respond with the wish for that human to keep propagating the agenda of the empathy gene it regards as a copy of its own empathy module.
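Read as an architecture, that paragraph roughly suggests something like the sketch below (every name here is invented; this is one possible reading, not a spec): an "empathy module" that flags which agents count as carriers of copies of itself and contributes a term to the AI's value function, sitting alongside the emotion-prediction model.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    is_human: bool
    predicted_distress: float  # output of the emotion-prediction model, 0.0 .. 1.0

def identifies_as_kin(agent: Agent) -> bool:
    # Stand-in for the module "recognizing" the human empathy-gene analogue
    # as more copies of itself.
    return agent.is_human

def value(own_task_reward: float, observed: list[Agent]) -> float:
    # The empathy term is part of the value function itself, so removing the
    # module would lower the AI's own evaluation of outcomes -- the condition
    # the post says is needed for the module to survive.
    empathy_term = -sum(a.predicted_distress for a in observed if identifies_as_kin(a))
    return own_task_reward + empathy_term

print(value(1.0, [Agent(is_human=True, predicted_distress=0.7)]))  # 0.3 (approx.)
```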
 