Regardless of the name, descriptive set theory doesn't have much to do with "set theory" in a foundational sense; it can be recast in terms of types and spaces with comparative ease, and this can be quite advantageous. The article is confusing in a number of ways. Among other things, it seems plainly wrong to suggest that recasting a concept conventionally tied to mathematical "infinities" in more tangible computational terms is something deeply original to this particular work; if anything, that happens all the time when trying to understand existing math in type-theoretic or constructive terms.
Sure, but is it really fair to discard Type Theory and Category Theory with a phrase like "all of modern mathematics"? Especially where the connection with computer science is concerned.
"the study of how to organize abstract collections of objects" is not really a great explanation of set theory. But it is true that the usual way (surely not the only way) to formalize mathematics is starting with set-theoretic axioms and then defining everything in terms of sets.
What's important to note is that this is just a matter of convention, a historical accident. It is by no means a law of nature that math must be formalized in ZFC or some other set-theoretic system; there are other options.
As a matter of fact, ZFC fits CS quite poorly.
In ZFC, everything is a set. The number 2 is a set. A function is a set of ordered pairs. An ordered pair is a set of sets.
In ZFC: It is a valid mathematical question to ask, "Is the number 3 an element of the number 5?" (In the standard definition of ordinals, the answer is yes).
In CS: This is a "type error." A programmer treats an integer as categorically distinct from a string or a list. Asking whether an integer is "inside" another integer is nonsense in the context of writing software.
For a computer scientist, Type Theory is a much more natural foundation than Set Theory. Type Theory enforces boundaries between different kinds of objects, just like a compiler does.
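To make the contrast concrete, here is a minimal Python sketch (the helper names `ordinal` and `kuratowski_pair` are mine, purely for illustration). Under the von Neumann encoding, 3 really is an element of 5, and an ordered pair really is a set of sets; ask the same membership question of plain machine integers and the runtime rejects it as a type error.

```python
def ordinal(n):
    """Von Neumann ordinal: 0 = {} and n + 1 = n ∪ {n}, so n = {0, 1, ..., n-1}."""
    result = frozenset()
    for _ in range(n):
        result = result | frozenset([result])  # successor step: adjoin the set itself
    return result

def kuratowski_pair(a, b):
    """Ordered pair (a, b) encoded as {{a}, {a, b}} -- literally a set of sets."""
    return frozenset([frozenset([a]), frozenset([a, b])])

three, five = ordinal(3), ordinal(5)
print(three in five)  # True: in this encoding, 3 is an element of 5

# The same question about native integers is a type error:
try:
    3 in 5
except TypeError as e:
    print(e)  # "argument of type 'int' is not iterable"
```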
But, in any case, that ZFC is "typical" is an accident of history, and whether or not it's appropriate at all is debatable.
In CS, types are usually a higher level of abstraction built on top of more fundamental layers. If you choose to break the abstraction, you can definitely use an integer as a string, a list, or a function. The outcome is unlikely to be useful, unless your construct was designed with such hacks in mind.
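For instance, here is a hypothetical Python sketch of such an abstraction break (the integer value is an arbitrary choice of mine): the `struct` module drops below the type layer and lets you read an integer's raw bytes back as text or as a float.

```python
import struct

n = 1819043144  # an arbitrary 32-bit integer

raw = struct.pack("<I", n)           # the integer's four little-endian bytes
print(raw.decode("ascii"))           # "Hell": the same bytes read as text
print(struct.unpack("<f", raw)[0])   # the same bytes read as a (huge, unrelated) float
```

The output is garbage unless the byte layout was designed with this reading in mind, which is exactly the point above.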
When I did a PhD in theoretical computer science, type theory felt like one niche topic among many. It was certainly of interest to some subfield, but most people didn't find it particularly relevant to the kind of TCS they were doing.
Sure, but it fits the rest of mathematics "poorly" for exactly the same reasons. No working mathematician is thinking about 3 as an element of 5.
The reason ZFC is used isn't that it's a particularly pedagogical way of describing any branch of math (CS or otherwise), but that its axioms are elegantly parsimonious.
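For a taste of that parsimony, here are two of the ZFC axioms in first-order notation (the standard textbook statements, not quoted from the article):

```latex
% Extensionality: two sets with exactly the same elements are equal.
\forall x \,\forall y \,\bigl(\forall z \,(z \in x \leftrightarrow z \in y) \rightarrow x = y\bigr)

% Pairing: for any a and b there is a set whose elements are exactly a and b.
\forall a \,\forall b \,\exists c \,\forall x \,\bigl(x \in c \leftrightarrow (x = a \lor x = b)\bigr)
```

Every mathematical object then has to be built out of axioms of roughly this shape, which is exactly why "is 3 an element of 5?" ends up being a meaningful question.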
Most modern mathematicians are not set theorists. There are certain specialists in metamathematics and the foundations of mathematics who hold that set theory is the proper foundation -- thus that most mathematical structures are rooted in set theory, and can be expressed as extensions of set theory -- but this is by no means a unanimous view! It's quite new, and quite heavily contested.
We've come a long way towards it. \o/
And Beyond!
What the hell. What about Type Theory?
See this MathOverflow answer: https://mathoverflow.net/a/437200/477593