Paragravity, tractor beams, and detached nacelle apologia

A Twitter thread of just-awakened posting, which I share for general interest:

Alistair Young on X: “Okay. Since I am sleepy enough and bored enough this morning to walk into threads like this, let’s go. See, I spent a whole bunch of time a while back figuring out how this sort of thing could work for my own writing universe, and now you get it applied here. Aren’t you lucky?” / X

With a couple of minor side notes:

A side-effect of that General Theory of Artificial Gravity, incidentally, is that when the ship you have on the far end of your tractor beam is struggling to get away from you, their engine power is being translated directly into surges in your beam projector.

If they do manage to get away, the most “realistic” reason isn’t in the beam physics or whatever: it’s that the overload safety fuse - which is designed to prevent you from vaporizing expensive Starfleet equipment and leaving an ugly hole in deck 14 - blew.

and

As a further side-note, the best unspoken reason to have features like detached warp nacelles is the same reason to have shiny, polished brightwork on old-time naval vessels.

“If we could spend money on _this_ to look cool, just _imagine_ what we spent on the parts more fundamental to ruining your day.”

Playing the part of, well, not devil’s advocate but maybe mildly mischievous imp’s advocate: wouldn’t a possible justification for having detached nacelles, as discussed in sidenote 2, be the same as the logic regarding tractor beams discussed in sidenote 1? As in: while ideally detached nacelles are pointless, in the unlikely case of unforeseen spacetime anomalies, design or manufacturing errors, or enemy action there might be a malfunction, and that malfunction might lead to an explosion; and if there is an explosion, one of the best defenses against things that go boom is the inverse square law. Therefore it makes sense to place things that might potentially go boom as far away from the things you want to keep safe as you practically can. Again, mildly mischievous imp’s advocate, not devil’s advocate; we all know the real reason for detached warp nacelles is that it looked cool and weird and interesting, especially in the 60s when very few people knew what spacecraft should “actually” look like.
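(A toy illustration of the imp’s case, with numbers I made up on the spot: treat a nacelle mishap as a point-source blast and the energy flux through your hull falls off as 1/r², so every metre of standoff buys quadratic protection.)

```python
from math import pi

# Toy illustration of the inverse-square argument for standoff distance.
# The yield and the distances are invented for the example, not canon.

def blast_flux(yield_joules: float, distance_m: float) -> float:
    """Energy per unit area (J/m^2) through a sphere of radius distance_m."""
    return yield_joules / (4 * pi * distance_m ** 2)

boom = 1.0e12  # hypothetical nacelle mishap, roughly 0.24 kt TNT-equivalent

for standoff in (10, 50, 300):  # metres between the nacelle and the hull
    print(f"{standoff:>4} m: {blast_flux(boom, standoff):.2e} J/m^2")

# Moving from 10 m to 300 m of separation cuts the flux by a factor of 900.
```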

Also, on an unrelated note: wow, it’s been over a year since I last posted here. It makes me feel weirdly nostalgic.

Welcome back, friend. And yes, making boomenstuff go away from hoomanstuff was an important concept in 1960s space travel IRL, to the point that casual public observers could notice, since boomenstuff was still going boom with regularity off one of Florida’s more popular beaches. And I’d totally expect Zefram Cochrane to have done just that.

Why they’d still do that two centuries later is the part that confuses me. A Nimitz-class doesn’t have an ejector for the main reactor AFAICT, and we’ve been running fission piles for under one century.


To be fair, nuclear reactors rarely explode even if they go prompt critical; unless you have spectacularly screwed up the design somewhere along the line, they just melt down. For things powered by antimatter annihilation, there are very few major failure modes that don’t involve explosions.

Detached, in this case, meaning “not physically connected to the ship”, which is thus far only a 32nd-century thing (later-season Discovery; probably also showed up in Starfleet Academy, which I haven’t seen).

And yes, welcome back!

Yeah, but that’s because Starfleet builds really shitty warp cores.

Good engineering practice would dictate that you only let as much antimatter into the core as you need to react with the matter (and, given the way warp cores output energy, in the form of plasma streams¹, that’s a lot less than mass-equivalent), and as such if you lose core containment, there’s only a tiny, tiny amount mixed in there with the hot plasma. A warp core breach should be like a fusion reactor containment failure - basically a fizzle, in which the hot plasma wreaks havoc on the delicate machinery on the inside walls of the containment vessel, but isn’t net-energetic enough to pierce it.

(And even if, by some crazy random happenstance, it did, all you’d end up with is an engineering compartment on fire and a bunch of flash-rusted junk inside the core, not your ship exploding.)

tl;dr any warp core problem should be trivially handled by yanking the big red SCRAM handle which physically moves some magnets into antimatter-feed-line-closing-off positions.

That warp core breaches actually do blow the ship up means that there has to be a gawdawful assload² of unreacted antimatter floating around inside the core at all times, which is something that a sane engineer, or even a competent mad engineer, would absolutely never allow.

(Since warp cores only started exploding in the TNG era, I presume that Starfleet Engineering went downhill rapidly in between TOS and then, probably because Scotty was no longer around to beat the stupid out of engineering-track middies.)


  1. Which a sensible design would immediately feed into MHD generators followed by thermal ones in order to turn them into something sensible, like electricity, but hey, if you want to run every freakin’ light-switch on your starship by piping star-hot plasma to it, I can’t actually stop you. Although the Federation Engineer Licensing Board bloody well should.
  2. Admittedly, the definition of antimatter gawdawful assload is somewhere in the milligrams.
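(A quick back-of-envelope on why milligrams already qualify; the milligram figure is just illustrative:)

```python
# Why a "gawdawful assload" of antimatter bottoms out in the milligrams:
# 1 mg of antimatter meeting 1 mg of ordinary matter, in TNT-equivalent terms.
C = 299_792_458.0      # m/s
KT_TNT = 4.184e12      # J per kiloton of TNT

m_total = 2e-6                      # kg annihilated (1 mg + 1 mg)
energy = m_total * C ** 2           # ~1.8e11 J
print(f"~{energy / KT_TNT * 1000:.0f} tons TNT-equivalent")   # ~43 tons
```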

Well, it could also be that Starfleet’s running their reactors way too hot (or, equivalently, their nacelles are way too inefficient at gravimetric distortion). The lowest output I’ve seen quoted for a 24th-century warp core is 150 TW. The mass-equivalence of 150 TW is 1.6689 grams per second, half of which would be regular matter in a 100% efficient warp core.

Since nothing is ever 100% efficient, they must be dumping a gram or two of antimatter per second into the reaction chambers. Even if it’s distributed across massively parallel chambers, that’s a lot of mass flow for a SCRAM to chop in a hurry – without breaching containment, mind you. And it’s not hard to imagine uncontained antimatter breaking down the partitions between those chambers rather rapidly.

To put it another way, one second’s worth of uncontained and uncatalyzed antimatter flow into a 150 TW warp core generates 35.6 kilotons TNT-equivalent energy, about the scale of the Hiroshima and Nagasaki initiations put together. I think that’s enough to produce the sort of warp-core breach explosions we see depicted, don’t you?
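(For anyone who wants to check my arithmetic, here’s the back-of-envelope, taking the 150 TW figure at face value:)

```python
# The arithmetic behind the figures above, taking the quoted 150 TW at face
# value and assuming every bit of annihilation energy shows up as output.
C = 299_792_458.0      # m/s
KT_TNT = 4.184e12      # J per kiloton of TNT

power = 150e12                            # W
mass_rate = power / C ** 2                # kg/s annihilated, ~1.67 g/s
print(f"{mass_rate * 1000:.4f} g/s annihilated, half of it antimatter")

one_second = power * 1.0                  # J released by one second of flow
print(f"{one_second / KT_TNT:.1f} kt TNT-equivalent")   # roughly 36 kt
```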

On a side note, flicking through my TNG tech manual suggests that while they start at a more reasonable 10:1 ratio (although still not nearly as matter-biased as my back-of-enveloping suggests), at high speeds they narrow the ratio all the way to 1:1.

Which is an absurdity, since the power take-off is in the form of energetic plasma (and all they use the actual gamma-photon direct output of the M/AM annihilation for is heating the plasma), so the actual efficiency of a warp core running at a 1:1 ratio is 0%. Its entire power output goes directly to waste heat (mostly gamma photons with a smattering of neutrinos)!

Said back-enveloping, relevantly, points out that since the entire energy take-off is in the form of plasma (i.e., they’re not lining the inside of the chamber wall with gammavoltaics or the outside with a thermal jacket hooked to a steam plant), efficiency is maximized by using a lot of matter, such that it can soak up all the energetic gamma photons emitted by the annihilation reaction. Which implies that you’re firing your antideuterium beam into a thick, dense cloud of mag-squished deuterium slush to achieve that.
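(If you want that intuition in toy-model form: this is a deliberately crude sketch with a made-up opacity constant, not anything out of the tech manual.)

```python
from math import exp

# Deliberately crude toy model of plasma-channel efficiency vs. intermix ratio.
# Assumptions are mine, not the tech manual's: all annihilation energy appears
# as gamma photons, the only useful take-off is the surplus matter those photons
# heat into plasma, and the surplus absorbs them with a made-up opacity k.

def plasma_channel_efficiency(ratio: float, k: float = 0.3) -> float:
    """Fraction of annihilation energy captured by the surplus matter at a
    matter:antimatter mass ratio of ratio:1."""
    surplus = ratio - 1.0               # matter left over after annihilation
    return 1.0 - exp(-k * surplus)      # zero at 1:1, climbing as ratio grows

for ratio in (1, 2, 5, 10, 25):
    print(f"{ratio:>2}:1  ->  {plasma_channel_efficiency(ratio):.0%}")
```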

This, in turn, is relevant to safety concerns, inasmuch as it implies that in the event of a slow SCRAM, there should be plenty of matter in the core to buffer the last flickers of the antimatter reaction - and if you end up violating overpressure constraints, the vacuum of space is just outside for you to vent the resulting energetic plasma into. No need to dump the whole core; just open a dump valve[1].

Mostly, things are inefficient because of failure to capture all of the energy produced, not because of failure to react. (In the case of M/AM annihilation in warp cores, this covers things like the neutrinos produced, not to mention that you’re never going to capture all the gamma-photons.) So while they’re going to have to annihilate slightly more than the theoretical 1.67 g/s to produce 150 TW, it is all going to annihilate.

(Side note: also, we know empirically that warp cores are absurdly efficient, since they sit right there in main engineering where you can walk right up to the core and touch it without your hand and your face instantly being melted off by the waste heat portion of terawatt-scale generation. There basically have to be a shitton of nines at play for that to be even slightly plausible.)

But anyway, sure, arguendo, they’re tossing 1 g/s into the core, but that’s a flow rate[2]. If the engineers aren’t insane (see last post), they’re still not feeding it in there any faster than it’s annihilating. You can feed 1 g/s into the core and still not have more than, say, 1 μg unreacted in there at any given time, as long as you never let the feed rate exceed the annihilation rate, even without magic dilithium catalysis.
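(The inventory argument in sketch form, with illustrative numbers rather than anything canonical:)

```python
# Minimal sketch of the inventory argument: with a controller that never feeds
# faster than the core annihilates, the unreacted antimatter in the chamber
# stays pinned at a tiny cap regardless of throughput. Numbers are illustrative.

FEED_RATE = 1e-3         # kg/s of antimatter throughput (the "1 g/s" case)
INVENTORY_CAP = 1e-9     # kg: never allow more than ~1 ug unreacted in the core
DT = 1e-6                # s: controller timestep

inventory = 0.0
for _ in range(2_000_000):                        # two simulated seconds
    burned = min(inventory, FEED_RATE * DT)       # reaction keeps pace with feed
    inventory -= burned
    headroom = INVENTORY_CAP - inventory
    inventory += min(FEED_RATE * DT, headroom)    # feed only up to the cap

print(f"unreacted inventory never exceeds {INVENTORY_CAP:.0e} kg by construction")
print(f"final unreacted inventory: {inventory:.2e} kg")
```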

Having a full gram unreacted in there would, or should, require epic well-beyond-Chernobyl levels of gross incompetence on the part of the Starfleet engineers, including deliberately disabling all the safety systems which the aforementioned sane engineers would have implemented to prevent this ever from happening.


I mean, obviously in ST canon they don’t build safe, sane (and consensual?) antimatter reactors, but good gods, y’all. It’s really hard to do it this badly!


  1. Or, rather, let the fully automatic triggered-by-EPS-overpressure-without-need-for-human-or-computer-intervention dump valve do its thing. ↩︎

  2. It’s also a tiny flow rate. (I’m not sure I can even turn my faucets down low enough to get 1 g/s of water.) I don’t believe magnetically valving it or diverting it into a dedicated SCRAM trap would be beyond the competence of anyone who could build a bulk antimatter handling system in the first place. ↩︎

I don’t think your “side note” is as peripheral as you do… in fact, my guess is that it turns the whole issue from my-gods-how-can-you-be-this-stupid into regrettable-downside-of-design-tradeoff.

Yes, there do have to be absolutely insane levels of efficiency in thermal capture/shunting to run a warp core. Which got me to reflect on the depicted history of (human) warp core development.

Let’s deal with something that would not have occurred to mid-60s sci-fi writers, but sticks out like a sore thumb after half a century of real-world experience. ST warp vessels have no visible radiators.

Never mind the waste heat from the warp core; the life support environment alone demands waste heat removal on the megawatt scale. They’re clearly not using high-surface-area passive radiative transfer. So what do they use?
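(For a sense of scale before answering, here’s a rough Stefan-Boltzmann estimate; the load, emissivity, and panel temperature are my assumptions, not anything from the tech manuals.)

```python
# Rough Stefan-Boltzmann check on why even the hotel load is awkward to radiate
# passively. The power figure, emissivity, and panel temperature are my guesses.
SIGMA = 5.670374419e-8    # W / (m^2 K^4)

waste_heat = 5e6          # W: a few megawatts of life-support/hotel load
emissivity = 0.9
panel_temp = 300.0        # K: radiators running near room temperature

area = waste_heat / (emissivity * SIGMA * panel_temp ** 4)
print(f"~{area:,.0f} m^2 of radiator required")   # on the order of 12,000 m^2
```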

Consider also their design constraints, especially early on. The difference between the main engineering spaces of the NX-01 and the NCC-1701-D is telling, particularly in the core volume/shirtsleeve volume ratio. Pre-Federation design needed a warp core that would fit in a space highly limited by warp field geometry, yet was accessible for monitoring and maintenance. The long design cycle to produce the Warp Five propulsion stack speaks to the level of optimization required. (I am not indulging the technical and cultural paternalism of High-Command-Era Vulcan, given all the evidence available. Hard problems are hard, and you don’t learn how to do your own designs if you don’t solve them your own way.)

EVAs with warp engaged are highly inadvisable, and even in flat spacetime, conditions outside the hull are too unpredictable. And the geometry of warp fields dictates that you can’t put the reactor out on a really long cable or truss, the way a hot-soup lighthugger or a sail-rigged starwisp would.

The hint is in the original terminology. Gravimetric Field Displacement Manifold. If the warp core were merely an EPS source, with the nacelles doing all the folding and spindling, it wouldn’t be gravimetric anything. Something about warping spacetime is integral to the reactor itself; nacelles exist to establish the warp geometry just as frames do on the Imperial frameslip drive.

Now the production of the gravimetric effects is the “here a miracle occurs” part, but I propose that the thermodynamic and gravimetric paradoxes mostly cancel. To wit: some gravimetric effect happening inside the core is the secret to the inexplicable thermal efficiency, invisible entropic waste venting, and paradoxical fuel intermix ratios alike.

The reactor doesn’t have some absurd superconductive heat-handling capacity; it’s using gravimetrics to boost heat conduction and redirect particles from structure to output.

The ships do radiate their waste heat, but it’s channeled to the exterior of the warp bubble, where one may cheat outrageously on the subject of effective “surface area”.

And the fuel ratio issues are due to maintaining that internal gravimetric shielding effect as output rises, rather than anything about EPS production.

And now warp core breaches make sense, because they’re not a direct effect of antimatter containment failure; that’s at most a side effect of the real failure: collapse of the gravimetric shield/shunt effect. It doesn’t matter if you SCRAM the M/AM feeds if the problem comes from a crapton of thermal and particle radiation suddenly not being removed. SCRAM within a microsecond and you’re still looking at 150 MJ of uncontrolled kinetic energy sitting next to your M/AM containment system. Parallelizing just makes the problem worse, creating more containment infrastructure to be warped, irradiated, melted, vaporized, and annihilated.
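(Checking that figure, under the same 150 TW assumption as before:)

```python
# Sanity check on the microsecond-SCRAM figure: energy still in flight during
# the last microsecond of 150 TW operation, in more familiar units.
power = 150e12           # W
scram_latency = 1e-6     # s: a very optimistic SCRAM response time

energy = power * scram_latency
print(f"{energy / 1e6:.0f} MJ")                        # 150 MJ
print(f"~{energy / 4.184e6:.0f} kg TNT-equivalent")    # roughly 36 kg of TNT
```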

Once established, the paradigm locked itself in. We know multiple attempts at shifting the paradigm were made: spore drive, Excelsior’s transwarp, retrofitting the Borg warp/transwarp system, the Vulcan/Romulan skunkworks project, and so on. Apparently nothing clicked until the Great Burn razed the galaxy’s dilithium-based infrastructure and forced a reset.
