The Problem That Built an Industry

(ajitem.com)

45 points | by ShaggyHotDog 3 hours ago

7 comments

  • arrsingh 1 hour ago
    Interesting to note right at the start of the article that they sat on a plane next to each other in 1953, but the formal partnership between AA and IBM wasn't until 1959 - 6 years later! The article makes it look like all this happened magically fast, but in reality it's a reminder that things take time!

    >> is almost mythological. In 1953, C.R. Smith, president of American Airlines, was seated next to R. Blair Smith, an IBM salesman, on a cross-country flight. By the time they landed, the outline of a solution had been sketched. IBM and American Airlines entered a formal development partnership in 1959.

    edit: oh, and then the actual system didn't go live until another 5 years later - in 1964. Over a decade after the two of them sat next to each other.

    A reminder to myself for when my potential customers don't sign the deal 5 minutes after my pitch!

  • StilesCrisis 1 hour ago
    "The key insight is [...]. No daemons. No background threads. No connection state persisted in memory between transactions."

    Closed the tab.

    • arrsingh 1 hour ago
      I noticed that too and rolled my eyes as well, but I'm glad I kept reading - it's actually quite a good article. Maybe the author used an LLM to help with some copy editing, but should probably have given it less editorial agency.

      Either way, I'm glad I read it and am waiting for the other parts of the series. I'm really curious how to get access to this airline booking data so I can write my own bot to book my flights and work through all the permutations and combinations to find the best deal.

    • croisillon 1 hour ago
      ironically...

        "That is not coincidence — it is the market discovering the optimal solution to a specific problem. When you see that pattern in your own domain, pay attention to it."
    • cr125rider 1 hour ago
      Can you explain why that’s wrong?
      • defen 1 hour ago
        It's the LLM-generated-text signature.
  • neilv 1 hour ago
    ITA Software integrated with the mainframe network, and was acquired by Google.

    An exec made a public quote that they couldn't have done it if they hadn't used Lisp.

    (Today, the programming language landscape is somewhat more powerful. Rust got some metaprogramming features informed by Lisps, for example, and the team might've been able to slog through that.)

  • cr125rider 1 hour ago
    Can you add RSS to your site? I’d love to follow but can’t.
  • paulnpace 2 hours ago
    > It...handles 50,000 transactions per second with sub-100ms latency on hardware that costs a fraction of an equivalent cloud footprint. It has been doing this for 60 years.

    Eat that, Bitcoin.

    • bombcar 1 hour ago
      50,000 transactions a second is a bunch for humans.

      It’s nothing for even an ancient CPU - let alone our modern marvels that make a Cray 1 cry.

      The key is an extremely well-thought and tested design.

    • andersmurphy 21 minutes ago
      I mean, you can easily do 100K TPS on an M1 with SQLite and a dynamic language. With sub-100ms latency.

      People don't do it because it's not fashionable (the cool kids are all on AWS with hundreds of containers, hosting thousands of microservices, because that's web scale).
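      [Editor's note: a minimal sketch of the parent comment's claim, using Python and the standard-library sqlite3 module. The table, flight numbers, and row counts are illustrative, and "TPS" here counts batched inserts inside one transaction, which is far cheaper than independent committed transactions - the measured number will vary by machine and by what you count.]

```python
import sqlite3
import time


def run_benchmark(n_rows: int = 100_000) -> float:
    """Insert n_rows rows into an in-memory SQLite table and return rows/sec."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE bookings (id INTEGER PRIMARY KEY, flight TEXT, seat TEXT)"
    )
    start = time.perf_counter()
    with conn:  # one enclosing transaction; batching is what makes this fast
        conn.executemany(
            "INSERT INTO bookings (flight, seat) VALUES (?, ?)",
            ((f"AA{i % 500}", f"{i % 30 + 1}A") for i in range(n_rows)),
        )
    elapsed = time.perf_counter() - start
    conn.close()
    return n_rows / elapsed


if __name__ == "__main__":
    print(f"{run_benchmark():,.0f} inserts/sec")
```

      For independent transactions (one commit per insert) the rate drops sharply on a disk-backed database, since each commit must be made durable; WAL mode and `PRAGMA synchronous` settings are the usual levers there.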

    • buckle8017 1 hour ago
      Ah yes a completely centralized system that scales, who would have thought.

      (For the pedantic, it's not exactly centralized nor federated since each airline treats their view of the world as absolutely correct)

      • arter45 18 minutes ago
        It’s not decentralized either, at least not in the Bitcoin sense of the word. Interactions between participants may be automated but they can ultimately rely on legal contracts and people. IATA is one of those participants, but everyone has to trust IATA in the airline industry because of their role. A decentralized airline system built to avoid trust in a central authority would be pretty different (actually the booking part may be the last of their problems there).

        It probably doesn’t require consensus among all participants (pairwise consensus at every step should be fine), so there is very likely no voting.

        It’s not even permissionless. It’s not like a random company could join this “chain” simply because they can generate a keypair.

        It’s a fundamentally different problem, and it makes sense that the architecture is different.

  • outside1234 1 hour ago
    It is interesting to think how AI will potentially change the dynamics back to this from general purpose software.

    In a world where implementation is free, will we see a return to built for purpose systems like this where we define the inputs and outputs desired and AI builds it from the ground up, completely for purpose?

    • DanielVZ 57 minutes ago
      I was thinking the same sans AI. What other industries require low latency high throughput transactions that haven’t been served yet?
  • zer00eyz 57 minutes ago
    SABRE is a reminder that things that are well designed just work.

    How many banks and ERPs, how many accounting systems are still running COBOL scripts? (A lot.)

    Think about modern web infrastructure and how we deploy...

    cpu -> hypervisor -> vm -> container -> run time -> library code -> your code

    Do we really need to stack all these turtles (abstractions) just to get instructions to a CPU?

    Every one of those layers has offshoots to other abstractions, tools, and functionality that only add to the complexity and convolution. Languages like Rust and Go compiling down to an executable are a step, revisiting how we deploy (the container layer) is probably on the table next... The use case for "serverless" is there (and edge compute), but the costs are still backwards because the software hasn't caught up yet.

    • 01HNNWZ0MV43FF 31 minutes ago
      Library code - This is necessary because some things are best done correctly, just once, and then reused. I am not going to write my own date/time handling code. Or crypto. Or image codecs.

      Run time - This makes development faster. Python, Lua, and Node.js projects can typically test out small changes locally faster than Rust and C++ can recompile. (I say this as a pro Rust user - The link step is so damned slow.)

      Container - This gives you a virtual instance of "apt-get". System package managers can't change, so we abstract over them and reuse working code to fit a new need. I am this very second building something in Docker that would trash my host system if I tried to install the dependencies. It's software that worked great on Ubuntu 22.04, but now I'm on Debian from 2026. Here I am reusing code that works, right?

      VM - Containers aren't a security sandbox. VMs allow multiple tenants to share hardware with relative safety. I didn't panic when the Spectre hacks came out - The cloud hosts handled it at their level. Without VMs, everyone would have to run their own dedicated hardware? Would I be buying a dedicated CPU core for my proof-of-concept app? VMs are the software equivalent of the electrical grid - Instead of everyone over-provisioning with the biggest generator they might ever need, everyone shares every power station. When a transmission line drops, the lights flicker and stay on. It's awe-inspiring once you realize how much work goes into, and how much convenience comes out of, that half-second blip when you _almost_ lose power but don't.

      Hypervisor - A hypervisor just manages the VMs, right?

      Come on. Don't walk gaily up to fences. Most of it's here for a reason.