Questions: Leonine Contracts, Illusory Promise, Resurrective Eidolons, and Intentional Communities

I might be jumping the Trope-a-Day queue a bit, but do the eldrae recognize the validity of the concept of a Leonine Contract?

In particular, how would they analyze the situation in the Chesterton quote at the top?

Well, fundamentally in ethics, there ain’t no such thing as a Leonine Contract in that sense.

(I say “in that sense” because there are fraud, coercion, and things that look like contracts but aren’t¹, none of which count, along with mixed forms like good old Vaderian “altering the bargain”, some of which are classed with leonine contracts even though they aren’t, technically speaking.

Most relevantly, though, there’s no doctrine of unconscionability – i.e., the notion that a contract is unenforceable because no reasonable or informed person would otherwise agree to it – on the grounds that all people legally competent to sign contracts are by definition reasonable persons capable of informing themselves, which classifies those who do not inform themselves as bloody stupid². And inasmuch as the Empire has a social policy on that sort of thing, it’s to not protect people against the consequences of Being Bloody Stupid, because that’s how you end up with a polity full of helpless, dependent chumps.)

But leaving aside all such instrumental considerations, the fundamental ethical reason why there ain’t no such thing as a leonine contract is that the concept of one necessarily implies that you can compel the service of other sophonts (or their property – say, their food – which is part of them by the principle of el daráv valté eloé có-sa dal) without their informed consent and no, just no, even if you are starving. Not even a step down that road of treating sophs as instrumentalities. That’s how mutual-slave-states end up rationalizing all their bullshit. So not happening.

That being said, in the latter situation given in the aforementioned Chesterton quote, what an Imperial citizen-shareholder trying that one might run into are the Altruism Statutes, which are basically the statute law backing up Article V (Responsibilities of the Citizen-Shareholder), para. 4 of the Imperial Charter:

Responsibility of Common Defense: Inasmuch as the Empire guarantees to its citizen-shareholders the right to, and the means for, the common defense, each citizen-shareholder of the Empire is amenable to and accepts the responsibility of participating in the common defense; to defend other citizen-shareholders when and wheresoever it may be necessary; as part of the citizen militia and severally from it to defend the Empire, and its people wholly or severally, when they are threatened, whether by ill deed or cataclysm of nature; and to value and preserve the rich heritage of our ancestors and our cultures both common and disparate.

…which makes doing so in itself a [criminal] breach of their sovereign services contract, belike, because they voluntarily obligated themselves in the matter.

(Although I should also make it clear that someone rescuing you from a situation they themselves did not create is owed recompense by the principle of mélith. If you value your life (which people who are still alive presumptively do), you owe the one who preserved it in due proportion.)

Plus, of course, this sort of thing is basically fuelling your extremely unenlightened self-interest with a giant pile of burning reputational capital, which apart from being bad for you in general, is likely to be particularly bad for you the next time you require the volunteered assistance of your fellow sophs…

Given the central place sacredness of contract has in Imperial society, what do Imperial law and eldraeic ethics have to say about illusory promise?

(And as a follow-on, even if there aren’t any legal, moral, or ethical obstacles as such, what will the neighbors tend to think of someone who’s constantly hedging their bets by resorting to them whenever they try to enter into a contract with someone else?)

Well, the first thing I should say is that there are far fewer examples of it under Imperial contract law than under most Earthly regimes I am familiar with. The obvious example that constitutes a lot of it is “lack of consideration” *here* – whereas Imperial contract law, being based on the ancient-era laws and customs of oaths, doesn’t require consideration at all, and simple promissory statements to the effect of “I promise to give you one thousand esteyn” are legally binding in a way that “I promise to give you one thousand dollars” isn’t.

Of the remainder, some things are similar (the Curial courts will impute meaning on the basis that everyone is assumed to be acting in good faith, for example, and a contract to which one does not agree – the website terms and conditions changed without notification, say – is no contract at all, as mentioned above). But in other cases – say, the promise of the proceeds of the promisor’s business activities, where the promisee doesn’t specify any particular activities and thus leaves open the option of ‘none’ – the Curial courts will point out that that is a completely legitimate outcome within the contract and so there’s no cause of action. Read more carefully next time.

So far as people who try to deliberately play the sneaky-weasel with this sort of thing – I refer you to my above comments about unenlightened self-interest and giant piles of burning reputational capital. Getting a reputation for doing this sort of thing without a damn good reason for so doing, preferably explained up-front, tends to rapidly leave a businesssoph without anyone to do business with…

Is it possible, even after the loss of a particular personality pattern in death, for a “close enough” pattern as to be effectively identical to the original person to be forensically reconstructed from secondhand sources (such as archived surveillance footage, life logs, individual cached memories and sense-experiences, and the like)?

Theoretically, you could make an eidolon (technical term for a mind-emulating AI based on memetic analysis) that would meet that standard – which is what makes them useful for modeling purposes – then uplift it to sophoncy; but in practice, “effectively identical” would require the kind of perfect information that you aren’t going to be able to reconstruct from the outside. The butterfly effect is in full play, minds being the chaotic systems they are, especially when you’re trying for sophont fidelity (which is much harder than just making a Kim Jong Un eidolon good enough for political modeling): you miss one insignificant-looking childhood incident in your reconstruction and it swings personality development off in a wildly different direction, sort of thing.

And it certainly wouldn’t qualify for legal purposes, since the internal structure of that kind of AI system doesn’t look anything like a bio-origin mind-state.

In split-brain scenarios, would each half of the brain be considered a separate, independent mind (regardless of whether or not they’re the same person) under Imperial law?

That depends. It’s not strictly speaking a binary state – and given the number of Fusions around of different topologies and making use of various kinds of gnostic nets, there is a pretty extensive body of law around this. The short answer is “it depends approximately on how much executive function is shared between the halves, much as identity depends on how much of the total mind-state is shared”.

Someone who has undergone a complete callosotomy is clearly manifesting distinct executive functions (after all, communication between the hemispheres is limited to a small number of subcortical pathways), and as such is likely to be regarded as two cohabiting individuals (forks of the pre-op self) by Imperial law.

And if they do eventually diverge into independent personalities (or originated as such upon the organism’s conception — say, if it began life as a single body with two separate brains with minimal cross-communication), what are the implications for contract law and property ownership?

That’s pretty much by standard rules. In the split-brain case, you’ve effectively forked, and those rules apply: property is jointly owned (with various default rules in re what is and is not individually alienable) and all forks are jointly and severally liable for the obligations of their contracts until and unless they diverge.

In the polysapic (originating that way naturally) case, or the post-divergence case, they’re legally separate individuals who just happen to be walking around in the same ‘shell; ownership and contracts apply to them separately. That this sets up a large number of potential scenarios which are likely to be a pain in the ass to resolve should be sufficient incentive not to pursue this way of life unless both of you can coordinate really well with each other.

Could one mind ever possibly evict another?

Only if the other signed over his half of the legal title to the body to the one, which would probably be a really bad idea if he wasn’t planning to depart forthwith anyway.

Are there any particularly good examples of successful intentional communities in the Associated Worlds?

(Not including the Empire itself, even if it counts on a technicality; looking for more things on the smaller end of the scale.)

Oh, there’s lots of ’em, at least if you allow for a rather broader scope of purposes than the Wikipedia article would suggest. Within the Empire, the most successful example would be the metavillage or metahabitat phenomenon, which is exactly what it says on the tin – a village or hab designed specifically to appeal to people with common interests, and to memetically, architecturally, functionally, etc., synergize with those interests: a writer community will have large libraries, many coffee shops, plentiful sources of inspiration, and lots of quiet walks and nice places to sit and write, for example. A space enthusiast community might even have a community launchpad! And the lifestyle is spreading elsewhere, too.

There’s also the First Distributed Exclavine Republic, which again, is exactly what it says on the tin. Planned habitats designed to Imperial social norms scattered all over the Worlds. And then there’s the various monasteries, retreats, and the like of the Flamic church.

I haven’t a huge number documented elsewhere in the Worlds – and in any case wish to save the ones I have for spoiler-free future use – but there are a lot of them. Remember the Microstatic Commission and its thousands of tiny freeholds? Well, those tend to exist because it’s easy for anyone with some idea they want to build a community around to launch a hab into some chunk of unclaimed space and set one up. They’re very popular in this particular future, both affiliated with larger polities and entirely independent.

Footnotes:

1. The obvious thing here being software EULAs and other such instruments which you don’t get to read before implicitly consenting to. The general reaction of a Curial court to that sort of thing is “haha no”.

2. Which is why the law does permit contracts – like, say, many of *here*’s credit card agreements – that allow one party to unilaterally alter the terms, provided you give your informed consent to them as per normal.

Granted, it is also widely held *there* that no-one capable of anything resembling functional cognition would ever sign such a thing, so it’s not like they show up very often.

 


This is a companion discussion topic for the original entry at https://eldraeverse.com/2016/07/02/questions-leonine-contracts-illusory-promise-resurrective-eidolons-and-intentional-communities

Comments migrated from WordPress:

That reminds me of a set of questions I posted on the unofficial Discord a few months ago, which I think would be relevant for this paragraph, and which I’m consequently reposting here…

  1. Do the eldrae have an equivalent to what human psychology calls “plurality”? Either in the traumagenic aka DID/OSDD sense, or in the parogenic/“tulpamancy” sense.
    [I imagine that the former had mostly been purged away by psychosurgery, and/or by trauma of this level being extremely rare in the first place, but I’m unconvinced about the latter.]
  2. If this is indeed a thing, would an eldrae who had developed plurality end up considered to be multiple sophs in one body? If yes, what does this mean for legal status (e.g. citizen-shareholdership)? If no, why?
  3. With mind-transfer (and, to a lesser extent, mind editing) being a mature technology in the Empire, would it be possible/easy for such alters/headmates (if, of course, such exist) to get separated away to their own bodies?

[Vaguely inspired by the Voidskipperverse, a very-vaguely-similar setting where mindcast-separation of newly created alters had ended up as the default form of reproduction, at least outside the most regressive enclaves.]

Well, now, this gets complicated, and may well be spread out across multiple posts as I take time to go back and examine things I have said previously on the topic. But, to reply in many parts:


Firstly, if you haven’t recently read Worldbuilding: Theory of Mind you may wish to do so now. It is your rich vein of fictive cognitive science explaining just how minds work in the 'verse. (Also recommended reading: Queen of Angels, Aristoi, etc.)


Secondly, it is very definitely the case that traumagenic DID is Not A Thing among Imperials.

(The most colorful reason for this is that inheriting the voice of a god-dragon snarling “MY WILL BE DONE” in the back of your brain rather does mean that long before you can beat or browbeat one into submission or brain-breakage, you will hit the point at which your target will rip out your intestines and wear them as a soft-meat tiara. But I digress.)

But of the primary reasons, there are two. The first is that the Citizen Eugenics Board and the Eupraxic Collegium between them have spent a long, long time Building Better Brains™ into a wonder of antifragile resilience. Most attention goes to the flashier things built into the alpha baseline, but wiping out depression, phobias, panic reactions, detrimental shock, addiction, incapacitating pain, overpowering motivational emotions, self-deception, etc., etc. is just the tip of the iceberg in building the foundations of mental stability.

This is all for the simple reason that superempowering technology is, well, superempowering, and if you want to be a race of free and ambitious demigods, you cannot allow insanity or lack of self-control. It’s just too dangerous.

And the second is that, well, look at the list of indications in the DSM-5 for dissociative identity disorder:

According to the DSM-5-TR, early childhood trauma, typically starting before 5–6 years of age, places someone at risk of developing dissociative identity disorder. Across diverse geographic regions, 90% of people diagnosed with dissociative identity disorder report experiencing multiple forms of childhood abuse, such as rape, violence, neglect, or severe bullying. Other traumatic childhood experiences that have been reported include painful medical and surgical procedures, war, terrorism, attachment disturbance, natural disaster, cult and occult abuse, loss of a loved one or loved ones, human trafficking, and dysfunctional family dynamics.

Now with the exception of “loss of a loved one or loved ones” (and, frankly, death is a lot more evitable than it used to be, too) and “natural disaster” (which constitutes a series of soluble technical problems), the Empire takes the position that none of these things are things that belong in civilized societies, and as a civilized society, therefore, it will stamp them all out with as much vigor as is required to make sure they stay stamped out.

The Reproductive Statutes exist to prevent unwanted children and unfit parents, along any axis you can think of. A vigorous legal system allows no sparrow to fall.

(And the actual crime rate in the Empire is represented most easily with negative exponents. No-one needs to lock their doors, and you can even have houses without them. Or without walls, for that matter. If you left a large gold brick on the street, the only reason it might not be there tomorrow is that someone handed it in to the local koban on the grounds that you must have lost it. If someone swipes an apple in a street market or deliberately takes someone else’s sandwich out of the office fridge, it’s a big enough anomaly that the news will talk for weeks about this sudden inexplicable outbreak of criminal degeneracy. Hell, a lot of people manage to forget that the nice people in the red coats are there for things other than giving directions, helping out with accidents and medical issues, and rescuing Fluffy from a tree.)

Meanwhile the Transcend, with its prime function of achieving perfect liberty with perfect coordination, is out there doing its damnedest to ensure that people don’t even hurt each other accidentally.

tl;dr it’s not a thing because the catalyzing events aren’t a thing. They take a strong opposing position against Bad Things, and find it both incomprehensible and distasteful that we, well, don’t.


(more will follow)

Here in part two, a note on the Imperial position on neurodivergency.

They’re for it, as long as it’s functional. It’s got to work for you. And this qualification in particular implies that it’s far too important to leave up to that blind idiot god evolution. To this end, a lot of cognitive cycles have been spent reducing mental architecture upgrades to sound engineering practice, such that they can have eidetic memories and lightning calculators without creating idiot savants, enable at-will hyperfocus and improved context-switching without any restrained children screaming because they can’t see their favorite crack in the wall, and have constructed an entire clade designed for enhanced pattern recognition, intuition, and dream-logic without condemning anyone to a life on antipsychotics lest they start believing that the CIA are controlling them through the Grey implants in their teeth.

But emphasis on the phrase sound engineering practice.

Now, look back at that hypothetical explanation for DID in the Worldbuilding: Theory of Mind article, and consider the implications of tulpamancy or other forms of self-pluralization, here.

You’re basically taking your will and using it to smash, from the inside, the safety locks nature has endowed you with – the ones that prevent you from being internally possessed or overshadowed by your overgrown mental subroutines – and rolling the dice on it coming out well.

At this point, any competent iatropsychist or psychedesigner will be doing a good impression of a horrified Dr. McCoy confronted by 20th century medicine, and pointing out that this is not good (or safe, or sane) engineering practice, and is about one step in safety above the people who try to reboot their own mind-state by washing down an ayahuasca cocktail with a ketamine chaser, and would you please for sanity’s sake (yours) knock it off lest you summon up that which you cannot put down – like, say, Mr. Hyde, vampire monsters from the id, or your fully realized simulation of Baron Vladimir Harkonnen – and which will proceed to wear you around like a hat.

(They spent some time reinforcing those safety locks so that this sort of thing wouldn’t happen accidentally, in fact. The cases they already had were enough.)

Of course, that doesn’t mean there aren’t uses for this sort of technology, but that’s what skillware programs and gnostic overlays and situational subpersonalities and parapersonalities and multithreaded metacortices (full or specialized threads, as you like), muses and quspes and other exoself extensions, etc., are for. The whole sophotechnological toolkit. You can have a full helpful chorus in your head from mere subconscious assistants through Aristoi-style daimones (with the advantage of not needing to train them so extensively) to a copy of yourself to talk to, and all in a nice, safe, well-engineered, documented, controllable manner.

And if you want to reproduce? You can just fork.


(more once again follows)


What’s a “quspe”? I don’t find any search results for it.

Quantum Singleton Processor, according to Greg Egan in Schild’s Ladder, Chapter 2; a device that makes your mind a quantum object, approximately

Quasi-Sapient Personal Extension; it’s a type of self-learning exoself plugin, and specifically the type that behaves most like having a helpful ally sharing your brain.


So, having filled in all that background, let’s return to the original questions:

1. In those senses, not really, for the reasons given above.

You can have a whole lot of threads and agents, some quite chatty, in your mind – to say nothing of actually carrying another mind-state around in a separate partition – as a product of highly sophisticated cognitive engineering, but that isn’t the same thing. And besides, they’re all a part of you. That’s the point. If any of them turn into an independent identity rather than a shard-identity of yours, something has gone wrong with the plan.

(Of course, if you’re prone to running experimental versions of advanced sophotechnology, something may indeed go wrong with the plan.)

2. It’s possible, but possible does not mean likely. Remember, no mental routine is complete in and of itself; the self emerges from the chorus of interacting routines. You, the alter, need to be able to demonstrate that you and all your supporting routines are, as a body, sufficiently diverged from you, the primary, to constitute a (technically and legally) distinct individual and not merely a glitchy subpersonality. This will involve a lot of time having your mind-state diagrammed and analyzed in depth by iatropsychists and cognitive engineering professionals.

A big godsdamned mess, mostly, seeing as when you fork and/or reproduce you’re supposed to make the appropriate arrangements in advance, especially as your alter - if a distinct individual but not a fully competent individual - may actually be your child in law, the same as clones, pruned forks, accidentally self-aware fictionals, and the like. Hope you’re good at cooperating with yourself.

(And no, they don’t get to be an automatic citizen-shareholder. For one thing, citizen-shares represent an investment in the Empire that covers the attached Citizen’s Dividend; they can’t be split, therefore, without reducing the value of everyone else’s citizen-shares, and that’s not happening. Prospective parents by any mechanism are, by custom, supposed to escrow the cost of purchasing their children’s shares in advance.)

3. Mind-editing is a requirement, and it’s going to take a highly skilled psychedesigner to disentangle the complex processes, emphasis on the plural, of identity in a case like this.

So possible, yes. Easy, no, for values of not easy mostly equal to expensive. Picture the psychedesign equivalent of “I’d like to hire you to divide Excel into two completely separate and independent programs each of which has half (meaning roughly two-thirds) of the functionality of the original, based on these two top-level functions”, and you’re getting the idea.

Putting them into a body afterwards, of course, is a rounding error by comparison.