bubblyworld 9 hours ago

My favourite trivia about "On Computable Numbers" is that Alan Turing got the definition of computable reals wrong! The way he defines them - namely that a computable real is a Turing machine that generates the sequence of digits of the real - has severe issues, because addition and other basic operations are incomputable due to subtle diagonalisation arguments (see https://jdh.hamkins.org/alan-turing-on-computable-numbers/ for instance).

It's interesting that Turing didn't realise this, as the whole paper revolves around diagonalisation arguments in a few places.

The "modern" take is to define a computable real to be a program that takes an epsilon as input and returns a rational number within that epsilon from the real being represented. Or something similar - in this case it is very simple to define addition, for instance, as you have control over these bounds.
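A minimal sketch of this representation in Python (the names `const`, `add`, and `sqrt2` are illustrative, not any standard API): a computable real is a function from a rational tolerance eps to a rational within eps of the true value, and addition just asks each input for half the allowed error.

```python
from fractions import Fraction

def const(q):
    """A rational number viewed as a computable real."""
    return lambda eps: Fraction(q)

def add(x, y):
    """Sum of two computable reals: request each input to within
    eps/2, so the sum is within eps of the true value."""
    return lambda eps: x(eps / 2) + y(eps / 2)

def sqrt2(eps):
    """sqrt(2) by bisection: shrink [lo, hi] until it is narrower
    than eps; lo is then within eps of sqrt(2)."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return lo

# Approximate 2*sqrt(2) ≈ 2.8284... to within 1/1000.
approx = add(sqrt2, sqrt2)(Fraction(1, 1000))
```

Note how the eps/2 split is exactly the "control over these bounds" that the digit-listing representation lacks.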

One more fact - equality is incomputable for both of these representations! There's no program that can uniformly decide whether two computable reals are the same, no matter how you cook up the definition. This one maybe isn't too surprising - in some sense no matter how many digits or bounds you check between two reals you can never be sure there isn't a smaller gap between them you haven't gotten to yet.
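The one-sided version of this is computable, though: you can semi-decide that two reals differ. A sketch, assuming the epsilon-approximation representation described above (`apart` is my name for it):

```python
from fractions import Fraction

def apart(x, y):
    """Halts (returning True) iff the computable reals x and y
    differ; loops forever if they are equal. Each of x(eps), y(eps)
    is within eps of its true value, so a gap larger than 2*eps
    proves the underlying reals are distinct."""
    eps = Fraction(1)
    while True:
        if abs(x(eps) - y(eps)) > 2 * eps:
            return True
        eps /= 2

# Distinct reals: halts with True. Equal reals would loop forever,
# which is exactly why equality is not decidable.
differs = apart(lambda e: Fraction(0), lambda e: Fraction(1, 3))
```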

  • a57721 8 hours ago

    Turing's definition of computable reals can't be wrong, because listing decimal digits is equivalent to giving rational approximations. The problem here is a confusion between real numbers and Turing machines that compute them.

    • bubblyworld 7 hours ago

      They are logically equivalent definitions (they determine the same set of computable reals) but they are not equivalent from the perspective of computability. As I said, with the first definition you cannot compute addition, whereas with the second you can. I highly recommend reading the blog post I linked which proves this - the issues involve self-reference and are quite subtle (like much of computability theory!).

      This has nothing to do with such a "confusion", by the way, which is obvious to anyone writing about this stuff (including me). Rather it's the difference between something being logically definable and computable, which Turing himself writes about in his paper.

      Wrong is a judgement call, of course, but given that we're talking about computability here I think it's fair to call Turing's definition flawed. There's a reason nobody uses it in computable analysis.

fen11 13 hours ago

> In 1936 both Alan Turing and Alonzo Church independently reached the conclusion, using different methods, that the answer is "no."

I think claiming these efforts were independent is misleading. Church was Turing’s PhD advisor in 1936, and one of the appendices of Turing’s thesis retroactively justifies the lambda calculus definition of computing by proving it equivalent to his Turing machine definition. That sounds less like competing definitions of computability to me, and more like a story of collaboration to produce a philosophical justification (Turing machines) for Church’s pure mathematical theory (lambda calculus). Which is not to disparage Turing either: creating this philosophical justification was an impressive and fundamentally important achievement. I think Church sometimes gets unfairly maligned in these discussions as an out-of-touch purist who didn’t care to come up with a believable definition, but clearly he cared enough to support his student working on the problem!

Is there some evidence for the independence or competitiveness of their work that I’ve missed?

Anyway, apart from this nitpick about the introduction, I appreciate seeing this foundational stuff explained clearly!

  • jarpschop 11 hours ago

    I don't think Turing knew about the existence of the lambda calculus when he wrote the paper. It was later that he decided to study his PhD under Church and move to Princeton.

    • samwho 7 hours ago

      This is correct, they were independent works. It’s spoken about in “The Annotated Turing” by Charles Petzold.

samwho 2 days ago

I’m the author of this post! <3

Happy to answer any questions you might have, and I’d love to hear feedback, good and bad.

thih9 7 hours ago

> You're telling me that everything can be computed with just these 5 instructions? Word processors, YouTube, video games, all of it?

Note the “computed” - this is about computing the results; there’s no mention of IO or UI.

E.g. a Turing complete language might still not allow you to build a word processor or YouTube; you could compute the color of every pixel, but you’d still need a way to display it.

  • Codrus 2 hours ago

    I/O is via tape. UI is reading the tape after the computer has written to it.

bob1029 19 hours ago

I've been playing around with interpreted machines quite a bit for genetic programming experiments. My current favorite minimal machine has 4 instructions:

0b00: Increment memory pointer (wraps)

0b01: Increment memory value (wraps)

0b10: Jump marker - Jump to next marker if memory value 1, prior if 2 (wraps)

0b11: Output current memory value

It has 2 tapes, instruction & memory. Memory is an array of byte.

This cannot handle any kind of input, but is useful for generating programs that can. Being able to pack 32 instructions into each interpreter machine word opens up some interesting possibilities for how we search the space.
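A quick interpreter sketch of the above. Halting at the end of the instruction tape, byte-sized cells, and resuming just after the target marker are my guesses at the unstated details:

```python
def run(program, mem_size=8, max_steps=10_000):
    """Interpreter for the 4-instruction machine. Opcode 2 doubles
    as a marker: when executed, jump to the next marker if the
    current cell is 1, to the prior marker if it is 2 (wrapping),
    and fall through otherwise."""
    markers = [i for i, op in enumerate(program) if op == 2]
    mem = [0] * mem_size
    ptr, pc, steps, out = 0, 0, 0, []
    while pc < len(program) and steps < max_steps:
        op = program[pc]
        if op == 0:            # increment memory pointer (wraps)
            ptr = (ptr + 1) % mem_size
        elif op == 1:          # increment memory value (wraps)
            mem[ptr] = (mem[ptr] + 1) % 256
        elif op == 2:          # jump marker
            if mem[ptr] in (1, 2):
                k = markers.index(pc)
                k = (k + 1) % len(markers) if mem[ptr] == 1 else (k - 1) % len(markers)
                pc = markers[k]
        elif op == 3:          # output current memory value
            out.append(mem[ptr])
        pc += 1
        steps += 1
    return out
```

For example, `run([2, 1, 3, 2])` counts upward through the marker loop, emitting 1, 2, 3 before the cell value stops matching either jump condition.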

qbane 7 hours ago

An essential point that the article did not address is that Turing machines can be efficiently encoded and simulated by other Turing machines. This universality is the real source of the impossibility of a Turing machine solving its own halting problem.

layer8 19 hours ago

> By the end of this post, you will know:

> • What can and cannot be computed.

I don’t think it delivered on the “can” part. (And I don’t think we really know that well.)

  • voidhorse 14 hours ago

    Unless I am missing something, as far as the work of Turing and Church is concerned, their answers are actually quite clear on this question of "can", namely the class of partial recursive functions. This is precisely what a Turing machine can compute. It is also possible to build up to these functions in a concrete way using the simpler class of primitive recursive functions. Rogers book on recursive functions and computability is a great reference on this topic.
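For a concrete taste of that build-up, here is addition defined by the primitive recursion scheme over the base functions zero and successor, sketched in Python (names are mine):

```python
def zero():
    return 0

def succ(n):
    return n + 1

def add(x, y):
    """Addition by primitive recursion on y:
    add(x, 0) = x, and add(x, y+1) = succ(add(x, y))."""
    if y == 0:
        return x
    return succ(add(x, y - 1))
```

Multiplication, exponentiation, and so on stack up the same way, each defined by recursion on functions already built.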

  • samwho 19 hours ago

When I was working on this post I found the way that Turing defined “computable numbers” a bit unsatisfying as well.

    What would you suggest I do to deliver on this?

    • layer8 18 hours ago

I would suggest not promising it. ;)

      Maybe rather:

      • That some truths cannot be computed.

      • samwho 18 hours ago

        I’m not sure I understand. I do explain what can be computed, don’t I?

        > Something is said to be "computable" if there exists an algorithm that can get from the given input to the expected output. For example, adding together 2 integers is computable.

        I could probably have dug into some of the restrictions, like how it has to be a finite number of steps.

        • layer8 18 hours ago

          This defines the term “computable”, but it doesn’t give you a sense of what things can be computed. Defining a term is entirely different from the above promise.

          At the level you’re describing Turing machines, it’s also not clear that readers would have a precise notion of what an algorithm is. At no point (unless I missed it) do you explain that any algorithm is supposed to be implementable as a Turing machine, or the assumption of what is otherwise known as the Church–Turing thesis.

          • samwho 18 hours ago

            That’s true, there’s no explicit definition of what an algorithm is. I may revisit this. Thank you for the feedback.

VitoVan 15 hours ago

Thank you, finally I got a chance to understand this concept, nice writing, nice sketches.

mrymd 2 days ago

dope

  • samwho 20 hours ago

    Thank you <3

zokier 19 hours ago

I'd argue the key difference between Turing machines and real-world computers is that computers do IO, respond to events, have real-time characteristics, and are interactive. All of these are key features in making computers useful, but are poorly (if at all) captured by Turing machine model. It's one thing to compute something, and another thing to play Doom.

  • bjornsing 13 hours ago

    The key difference between Turing machines and real-world computers is that Turing machines have infinite memory. A real-world computer is thus not really a Turing machine, but rather a finite state machine.

  • layer8 18 hours ago

    You can have your I/O and Doom graphics on the Turing tape no problem.

What’s different is that accessing an arbitrary position on the tape isn’t O(1), as is normally assumed for memory. On the other hand, memory (and address space) on real-world computers is finite, so you can always find a large-enough constant to make the Turing equivalent O(1) again.

  • mikewarot 18 hours ago

    There's an almost Turing equivalent mechanism that has no program counter and runs everything in parallel. I call it a BitGrid (it's my hobby horse). Because you can operate on all of the bits at the same time, you can get deterministic real time performance.

It's a systolic array of Look Up Tables, 4 bits in, 4 bits out, with a latch on each output.

  • mithametacs 18 hours ago

I/O is just built on top of the tape. Real computers do something similar: they reserve a region of the memory map for I/O.

    This is like saying a CPU isn't a computer. It's sort of right but sort of wrong, you know?