On Gay Marriage and the Separation of Church and State

In the discussion surrounding the topic of gay marriage, the separation of church and state is generally viewed as one of the main defenses of the pro side of the argument. I, however, tend to think that it is in truth one of the main arguments against it in the legal sphere. The issue is that the concept of marriage currently belongs on some level to both the church and the state, and thus enforcing the separation of church and state requires assigning sole ownership, in other words, sole ability to define/redefine, to one or the other. If we concede that the separation of church and state defends the pro-gay-marriage side, we are effectively conceding that marriage is first and foremost a state matter. Discerning which way the ownership should be assigned requires looking at how the current foot-in-both-worlds state of affairs developed.

Marriage has existed for who knows exactly how long, fundamentally as a religious union, passing down its associated customs and ideas through religion, not government. The government, in turn, used it as a convenient hook on which to hang certain economic and legal institutions. Our government created a specific legal institution and gave it the same name as the religious institution; this, I believe, was a violation of the separation of church and state… not a serious one in the short term, but it created an avenue for eventual shifts that allowed the state to influence things that should not be in its realm of influence. Now, of course, those shifts are coming to a head, as people look to the state to redefine what marriage means to the culture, when that should not be in its purview in the first place, as our culture is intended to be free.

If we are to get anywhere in resolving this debate, we must begin by reversing the original mistake, untangling the government from any defining authority over the concept of marriage in the culture. I propose we rename the legal institution currently known as “marriage” to some other name, after which extending it to include gay couples will likely receive much less opposition than before. The issue of the definition of “marriage” can then be played out on a cultural stage, rather than a legal one.

Linux Desktop, Package Management, and Commercial Software

Say I’m a company considering porting my program to linux. I find I’m entering a world where everything is streamlined for a different kind of software. The “right” way to install software is through the package manager, but the package manager expects software to come from centralized sources which “package” software developed by others, and to make matters worse, different linux distributions ship the same software packaged by different people and managed by different package managers. There’s no practical way for me to wedge my way into that system. I have a few choices:

  1. I can take over the work of the package maintainers and set up repositories for all the major distros for my program and provide people with directions for how to add a repository to their package manager. (which isn’t exactly smooth sailing for a non-power-user, and which is also a lot of work for me)
  2. I can provide package file downloads for the major distributions, losing the package manager’s ability to keep track of updates, but at least that way people get a link to a file they can download and then just click in their file browser to install. (still quite a bit of work, but at least it’s easy for my users)
  3. I can provide a self-extracting installer, completely ignoring the package manager and installing to a directory owned by the user, although by that mechanism I also run the risk of having dependency issues. I don’t mention installers that ignore the package manager and install system-wide… cause that’s a no-no. (fairly easy, but can be frustrating for a user if dependency issues crop up)

All of these solutions have a tendency to build up a lot of cruft somewhere in the system if commercial software becomes seriously used on the linux desktop, which it will have to be for linux to become a serious contender in the desktop market. So, if linux is ever to succeed as a desktop OS, we need an elegant, simple, and broadly accepted solution for installing applications (by which I mean programs with minimal integration into the system), one capable of allowing all of the following (I sketch what such a system might look like after the list):

  1. Unprivileged install to user-owned area, while possibly pulling in dependencies for the entire system
  2. Privileged install to entire system
  3. Dependency integration (one way; apps can depend on OS level stuff, but not the other way around) between it and the OS-level package managers for the different distros
  4. Repository support, which needs to be flexible enough to support many ways of distributing content such as bittorrent (since some applications can be BIG), able to handle large numbers of small repositories well from a use model point of view, and easily integrated into browsers and such so that people can provide “install links” which add the repository and install the package
  5. Installation from physical media which is also easily integrated into current GUIs and such
  6. The ability for programs to hook their own update procedures into this system, since some programs cease to function altogether when they are not up to date (clients for certain large network applications, for example), and so need to automatically check that they are up to date when they are run
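To make that wish list a little more concrete, here is a rough sketch, in Python purely for illustration, of the kind of surface such an application-level installer might expose. Nothing here is an existing tool; every name is made up to mirror the six requirements above.

    # Hypothetical sketch only: these names mirror the six requirements above
    # and do not correspond to any existing tool.
    from dataclasses import dataclass
    from enum import Enum, auto

    class InstallScope(Enum):
        USER = auto()    # requirement 1: unprivileged install to a user-owned area
        SYSTEM = auto()  # requirement 2: privileged install for the whole system

    @dataclass
    class AppSource:
        """Where an application comes from (requirements 4 and 5)."""
        kind: str      # e.g. "https", "bittorrent", "physical-media"
        location: str  # URL, magnet link, or mount point

    class AppInstaller:
        """The application-level manager, sitting above the distro's package manager."""

        def add_repository(self, source: AppSource) -> None:
            """Register a (possibly tiny) repository, e.g. via an 'install link'."""

        def install(self, app_id: str, scope: InstallScope) -> None:
            """Install an app, asking the OS-level package manager for any OS-level
            dependencies it declares (requirement 3: strictly one-way)."""

        def check_for_updates(self, app_id: str) -> bool:
            """Hook for programs that must be current to run at all (requirement 6);
            such a program would call this at startup."""

The point here is the shape of the interface, not an implementation.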

In order to accomplish this, a few things need to be well defined:

  1. An interface to the OS-level package manager that any implementation of this system will use, which will need to include a lexicon of names and common expectations for OS-level software that can be depended on, since names are not the same across distros (see the sketch after this list)
  2. A common format for repositories, both on physical media and on the internet
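As a sketch of what those two standards might boil down to (again in Python, and again with every name invented for illustration; the package-name mappings are from memory and merely illustrative):

    # Standard #1: a thin, distro-neutral interface that each distro's package
    # manager would implement, plus a shared lexicon of dependency names.
    from abc import ABC, abstractmethod

    class OSPackageManager(ABC):
        """What the application-level system is allowed to ask of apt, yum, and friends."""

        @abstractmethod
        def resolve(self, lexicon_name: str) -> str:
            """Map a lexicon name like 'opengl-runtime' to this distro's package name."""

        @abstractmethod
        def is_installed(self, lexicon_name: str) -> bool:
            """Check whether the OS-level dependency is already present."""

        @abstractmethod
        def install(self, lexicon_name: str) -> None:
            """One-way dependency: apps may pull OS packages in, never the reverse."""

    # A tiny slice of the shared lexicon: stable names on the left, each distro
    # family supplying its own mapping (illustrative only).
    EXAMPLE_LEXICON = {
        "opengl-runtime": {"debian-like": "libgl1", "fedora-like": "mesa-libGL"},
        "gtk3-runtime": {"debian-like": "libgtk-3-0", "fedora-like": "gtk3"},
    }

    # Standard #2, the common repository format, is reduced here to the bare
    # minimum a repository (online or on physical media) would publish about itself.
    EXAMPLE_REPO_METADATA = {
        "name": "Example Vendor Apps",
        "transports": ["https", "bittorrent"],
        "packages": [{"id": "example-app", "version": "1.0", "depends": ["opengl-runtime"]}],
    }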

If those 2 standards can be agreed upon, then implementations of such a system can compete for user approval in the usual manner of open source software, while requiring application vendors to supply only a single object to their customers regardless of distro or implementation of this higher-level system.

On calculus, infinitesimals, and infinities

Ever since the first vestiges of calculus were developed, mathematicians have been trying to make logically precise the concept of infinitely small and infinitely large numbers. Our intuitions would go places our logic couldn’t follow, conceptually journeying out into the various orders of infinity and landing back in the finite realm with a correct answer, but we could never say precisely how without being mired in contradictions up to our eyeballs. Finally a logically precise system for talking about these kinds of operations was developed, but it was done by ignoring the way our intuition seems to go about understanding them, and going at it from an entirely different perspective: the limit was born. Limits involve no infinities… instead they classify the behavior of functions arbitrarily near given values (or at arbitrarily large ones). While this approach worked, it slaughtered our intuition on the subject of calculus, and nearly all good mathematicians continue to think in terms of infinities and infinitesimals except when they have to nail something down precisely, at which point they set about the tedious task of translating their elegant intuitions into cumbersome limit-style statements.

I recently ran across some interesting theory that could change all that. The concept of hyperreal numbers (the set is denoted *R) and hypercomplex numbers (*C) (note that the term “hypercomplex numbers” is also used to refer to higher-dimensional sets of numbers of which the complex numbers are a subset, such as quaternions, but that’s not how I’m using it today) is a logically precise and self-consistent formulation of infinitesimals and infinities that allows for the construction of calculus without the use of limits, in what seems a much more intuitive way. The basic concept is to suppose some number ε exists such that for every integer n greater than zero, 0 < ε < 1/n. One of the axioms of the construction of the reals prevents this, specifically that any non-empty set with an upper bound must have a least upper bound (well, in this case it’s a matter of lower bounds), however using model theory one can prove that assuming the existence of ε does not change the truth of any first-order logical statement about the reals. Second-order statements, on the other hand… are fair game.
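To give a feel for how that transfer of first-order statements plays out: the statement “for every x > 0 there exists a positive integer n with 1/n < x” remains true in *R, but “integer” now means hyperinteger, and the n that works for x = ε is an infinite one. The least-upper-bound property, being a statement about all sets of numbers rather than about individual numbers, is second order, which is exactly why it fails to carry over and leaves room for ε.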

The way I think about *R is kind of like base-∞ numbers with real digits, or equivalently, polynomials in ε. The polynomial model gives somewhat the wrong impression, though, because ε is closer to zero than any real number and 1/ε is larger than any real number; thus, where a polynomial in a normal variable might be equal to a different polynomial at a specific value of the variable, polynomials in ε are truly separate numbers. Note that these polynomial representations are not always finite in length. For example, ε/(1+ε)=ε-ε²+ε³-…
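That last expansion is, at least formally, just the geometric series at work: 1/(1+ε) = 1 - ε + ε² - ε³ + …, and multiplying through by ε gives ε/(1+ε) = ε - ε² + ε³ - …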

Now you might be thinking “hold on a second here: ε hasn’t been well enough defined!” In fact it seems the set of values that fit the criteria for ε is infinite. That’s true, but it doesn’t mean ε hasn’t been well enough defined, as it turns out that all possible values of ε are perfectly symmetrical, much as it doesn’t matter which solution to x²=-1 you name i, as there are no absolute distinguishing factors among them, only relative ones.

To illustrate the issues regarding second-order logic, suppose you define an operator a ≈ b which tests whether |a-b| is infinitesimal. It is trivial to show that if a ≈ b and b ≈ c then a ≈ c; however, if a(n) ≈ a(n+1) for every hyperinteger n, it does not follow that a(0) ≈ a(1/ε). These kinds of things are important to get your mind around in order to avoid making logical mistakes with hyperreal numbers.
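A concrete case: take a(n) = nε. Then a(n+1) - a(n) = ε, so a(n) ≈ a(n+1) for every hyperinteger n, yet for an infinite hyperinteger H (say the hyperinteger part of 1/ε) we get a(H) = Hε ≈ 1, which is nowhere near a(0) = 0. Finite induction happily chains the ≈’s together; extending that chain out to infinite hyperintegers is precisely the kind of second-order move that does not transfer.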

The benefits to intuitive thought are really quite amazing. For example, differentiation is as simple as taking st((f(x+ε)-f(x))/ε) (where st(x) is the real number infinitesimally close to x). Integration is accomplished by adding together an infinite number of real values to get an infinite value, then multiplying by an infinitesimal value (or if you prefer, adding together an infinite number of infinitesimals). The Dirac delta is a true function on the hyperreals, as opposed to a concept which only exists as a limit of functions, or a measure.
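To see the derivative machinery in action, take f(x) = x²: st((f(x+ε)-f(x))/ε) = st(((x+ε)²-x²)/ε) = st((2xε+ε²)/ε) = st(2x+ε) = 2x. The leftover ε is simply swallowed by st, with no limit in sight.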

I’m currently considering the kinds of improvements that could be made to differential geometry through a similar kind of transformation.

The *other* wizard named Harry.

I read through The Dresden Files by Jim Butcher a bit back, and I simply must recommend them here. The best thing about these books is the author’s deep grasp of the nature of the spiritual world, which shows through in everything from the way he explains the nature of magic in his fictional world to the way he portrays the magical/spiritual characters and worlds involved. These books have spawned in me a lot of thinking and conversing with God about whether some of my beliefs about the nature of reality are really well founded or whether I’ve simply gone along with the winds of cultural assumption without really thinking about it. All in all, I’ve found the books to be something that God’s used to push me towards new truths that I would likely not have gotten to otherwise, being too comfortable where I was. Do prepare yourself for true depictions of the ugliness of both the spiritual world and the human soul, though; Jim Butcher does not hold back. Also, as the books progress, the depth and complexity of spiritual concepts Butcher explores increases, so hold onto your seat!

Since I’m reviewing a series of books here, I suppose I should at least comment on the slightly more mundane details… The prose is well written, the main character (who is also the narrator) is enjoyably witty, the plot is unpredictable but believable, all the characters have fullness and depth (proportional to how much they intersect with the plot, of course), the imagined world and its mechanics make sense, and I find the stories overall quite suspenseful. I can’t wait for the next one!

On compositing, APIs, performance, and simplicity

I’ve been thinking about the whole software structure involved in the sharing of video hardware among processes doing accelerated video operations (which I mean in a general sense, from video decoding to 3D). The development of compositing has certainly changed the paradigm of how this is done, and I believe it’s a great improvement. However, it came about as a hack on top of a design whose architects had never considered it, much like AJAX, and as a result there are some serious design issues that need to be addressed before it will truly work well. The problem that jumps out at me is that the critical path of video content rendering between its origin in a userspace process and its display on the screen has to cross the CPU/GPU barrier 3 times, which is just silly. Allow me to illustrate:

  1. Program(CPU) gives the GPU some data and directions for how to render it into a buffer. (Critical path goes from CPU to GPU)
  2. GPU renders into the buffer and it sits around waiting for the compositor(CPU) to get a scheduling slot. (Critical path goes from GPU to CPU)
  3. Compositor(CPU) tells the GPU to take the buffer and render it as a part of another 3D scene. (Critical path goes from CPU to GPU)
  4. During output of the next frame, the compositor’s output buffer is put on the screen.

Obviously, the compositor process shouldn’t be in the critical path here. Consider if the path looked like this:

  1. Compositor(CPU) responds to some event (such as input), calculates a description (as a function of time) of how to assemble various buffers into a 3D scene, and sends it to the GPU; a sketch of what such a description might look like follows this list. (Not part of critical path)
  2. Program(CPU) gives the GPU some data and directions for how to render it into a buffer. (Critical path goes from CPU to GPU)
  3. The GPU uses stored descriptions to automatically composite buffers in preparation for the next frame, thereby getting the program’s data to the screen.
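To make that description idea a bit more concrete, here is a rough sketch, in Python purely for illustration, of the sort of retained, time-parameterized scene description a compositor might hand to the GPU once. None of these types correspond to any real graphics API.

    # Hypothetical illustration: the compositor submits this once, and the GPU
    # (or driver) evaluates it per frame without waking the compositor process.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Placement:
        x: float
        y: float
        scale: float
        opacity: float

    @dataclass
    class Layer:
        buffer_id: int                           # the client-rendered buffer to sample from
        placement: Callable[[float], Placement]  # placement as a function of time (seconds)

    @dataclass
    class SceneDescription:
        layers: List[Layer]  # composited back to front

    def window_open_animation(buffer_id: int, start: float) -> Layer:
        """A 200 ms zoom/fade-in, described once rather than recomputed every frame."""
        def placement(t: float) -> Placement:
            p = min(max((t - start) / 0.2, 0.0), 1.0)  # animation progress in [0, 1]
            return Placement(x=100.0, y=100.0, scale=0.8 + 0.2 * p, opacity=p)
        return Layer(buffer_id=buffer_id, placement=placement)

    # Submitted once; only a new event (step 1) requires a new submission.
    scene = SceneDescription(layers=[window_open_animation(buffer_id=42, start=0.0)])

The key property is that the program’s buffer updates in step 2 reach the screen without the compositor process ever re-entering the critical path.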

This would not only increase the performance of composited desktops, but also greatly simplify the problems involved in avoiding tearing while maintaining low latency from generation to display, etc., since the problem wouldn’t be spread across multiple domains anymore, but entirely in the control of the graphics driver writers. Furthermore, compositors wouldn’t use constant CPU time anymore, could be stacked (which has its uses, from a software point of view), and even esoteric things like direct graphics hardware access from inside virtual machines through I/O virtualization wouldn’t require anything particularly unusual on this layer, which I think is one of the marks of a good solution. In order to implement this though, aside from the things that might need to be done at the hardware and driver levels (about which I know very little), one would need a different kind of graphics API for compositors, to allow them to send these compositing descriptions to the GPU; simply extending an existing API wouldn’t cut it.

I don’t pretend to know just how difficult this would be, but I do think it’s the right solution.