It’s Called A Guilt COMPLEX For A Reason
Graham suspected there were dozens of movies with premises very similar to this experiment, and was starting to wonder if he should have watched some of them…
20251126
Prompt from DailyPrompt.com
“Good afternoon, Graham!” the board head boomed with perfect professional cheer. “We’re excited to hear the progress on your research.”
A claim somewhat at odds with the blank, “I am certainly paying attention and am not at all bored” expressions worn by every other board member at the viewing table. Graham wasn’t sure whether genuine enthusiasm would have been more intimidating.
He turned away and busied himself setting slides up. Inwardly bracing.
“Well, um, progress has been very good! The EmoBot is now displaying ninety-eight percent of the target emotions, with most tests returning results consistent with human controls.”
Slight nods - though pensive lips and narrowed eyes made it clear they’d homed in on that pesky two percent.
“In addition, EmoBot seems to be stable, with self-regulation processes bringing it back to emotional baseline within three minutes of an emotion-prompting stimulus ending, well within the target of five minutes, even with novel stimuli.”
That got outright (albeit muted) signals of approval, which was a great comfort going into the BUT part of the presentation.
Graham steeled himself and clicked to the next slide. “Unfortunately, as predicted in the initial psychological modelling phase, guilt has been difficult to induce within the system. We have tried numerous approaches - simulation, peer modelling, Catholicism, direct programming - but as of yet EmoBot has not generated a satisfactory emotional response tied to its ‘remorse’ tagging.”
“So it shows remorse?” a board member interrupted. “Isn’t that the same thing?”
“…No.”
Do not ask if any of them had read the preliminary report. Definitely don’t remark on how many blasted hours formatting the blasted thing had taken him, or how much more bitter the experience had been for the knowledge it would likely be entirely ignored.
Instead, Graham clicked through to Appendix 2 and explained, “Our system defines ‘remorse’ as EmoBot knowing that the action it performed was considered unacceptable. This is separate from the emotional response. Consider how a human might feel glee while committing acts they know to be unacceptable.”
Board responses ranged from dubious to understanding.
If they had read the preliminary report they’d know that EmoBot was in fact prone to displaying positive emotions when certain members of the team declared an action unacceptable. Which, while technically within the stated goal of “humanlike emotional responses”, was troubling.
In fact, attempting to induce guilt was creating all sorts of strange behaviours. Graham was glad they had a stable backup to revert to. Even if they did manage to get EmoBot to develop guilt, the immediate question was whether they could do so without introducing… eccentricity… into the system.
But none of that was speculation you shared with investors, and thankfully nobody looked inclined to interrupt again. So Graham clicked back to the correct point in the presentation and cleared his throat.
“For the next phase, we are going to attempt more extreme simulated scenarios, designed to trigger strong existing negative emotions in EmoBot, which in combination with targeted neural net prompting should cultivate guilt responses. All tests will be conducted from a stable base template to minimise variables.”
And because the entire team felt it was a terrible idea to let a robot grow emotionally numb to violence. There were probably a hundred movies with that exact premise, and Graham sometimes wondered if he should have watched some.
By this point, he suspected they would just cost him sleep. Not like any of those writers understood robotics anyway. Right?
He pulled his focus back to the presentation. “We should have an update on progress within the next six months.”
Six weeks for the experiments, three months to desperately try other approaches in hope of positive results, an extra month to account for mundane crises like unexpected staff leave, and a fortnight to throw the blasted report together.
Without giving them a chance to complain or quibble, he clicked to the next slide and reiterated how much progress had already been made. Complete with video clips of EmoBot interacting with humans, and a quick demonstration where emotional test results were shown and the board voted on which they thought was EmoBot’s.
Votes were 23% correct, within tolerance of the 25% expected from random guessing, and below even the team’s own 31%. But even that proved EmoBot was performing exceptionally.
For everything except that one last pesky emotion, which unfortunately happened to be one of the “core” requirements in the project grant.
Somewhat ironically, Graham felt, given that the majority of CEOs, board members, and shareholder representatives he’d interacted with got along perfectly fine without displaying any capacity for guilt at all. Funny, that they’d then be adamant it was necessary to prevent an independently functioning agent from doing harm…
He pushed that thought away (hopefully) before any of it showed on his face. “Any questions?”
Please please please let them be about the final section of the presentation, not the middle bit… and definitely not about the preliminary report.
Prompt was “You’re a scientist who has just developed a robot that can learn and express real emotions. There is one emotion it has not grasped yet: guilt.”