Carrying on with the habit of mentioning projects here before they are finished, I have recently begun writing both of the books to do with Occam:
As I write this, only one chapter of each book has been written, but that will no doubt change quite soon.
I usually make a point of reaching the end of a project before adding any mention of it here. That is not quite the case this time around, however. My excuse is that this time the project is a ten-year one and I have reached something of a milestone.
The project is Occam, which is a somewhat nebulous term in the sense that it encompasses diverse sites, services and applications as well as the theory that underlies them. Rather than try to explain the best part of ten years of work in a few sentences, I thought it best just to post a couple of videos to give anyone who is interested the opportunity to get started. The first video is about installing all the software and familiarising yourself with the support sites, whilst the second video explains how Occam can support controlled natural language.
I have been badgered several times for documentation relating to all of this so I will likely start writing a couple of books soon. One will be a user manual and the other an apologia for the underlying theory. I hope the above videos will help a little in the meantime.
Update: I have recently come up with an elevator pitch for Occam, which is that it can verify the mathematical output of large language models such as ChatGPT. I tried to back this up with explanations but it is difficult to get it all across succinctly, as I am sure you can imagine. And, at the end of the day, Occam does not support controlled natural languages quite yet, which sounds like more of an excuse than an explanation.
So, pretty much on a whim, I thought I would use the grammars sandbox on Occam's website to see if I could at least get some output from ChatGPT to parse. So I asked it for a simple proof and it duly obliged. I then copied the proof into the grammars sandbox and set about manipulating Occam's default Florence vernacular in order to make it more like a controlled natural language. I also created a few custom grammars to handle the types, the group operator, the small amount of logic that the proof required and so on. Here is the result after I massaged the colours a bit. It took just over an hour. Since I had to do everything from scratch, I contend that with things set up in advance I could have gotten it done in a few minutes, or even less. I am Occam's creator and have been working on it for the best part of the last ten years but even I found this a bit startling.
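To give a flavour of what one of those custom grammars involves, here is a purely illustrative BNF sketch for terms built from the group operator. To be clear, this is not Occam's actual grammar syntax, and the rule and variable names are made up:

```
equation  ::=  term "=" term ;

term      ::=  "(" term "∘" term ")"
            |  variable
            ;

variable  ::=  "a" | "b" | "c" | "e" ;
```

A grammar along these lines is enough to parse simple equational steps such as (a ∘ e) = a, which is the kind of statement that a short group theory proof consists of.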
There are inevitably some caveats at this stage. Occam does not support complex types with properties yet, for example, so you cannot define a group type with an identity element. Also, carriage returns are significant in the default Florence vernacular and you cannot turn this off in the sandbox, so the proof does not look as contiguous as it could. Finally, the axiom has a conjunction on the right side of the biconditional which is not taken into account in the proof. If you really wanted the proof to verify then you would have to break it down a little more, but that would muddy things for now and, besides, I was frankly excited and keen to get it done. Anyway, trust me when I say that with the addition of complex types this would all verify. Here is the parse tree at least:
Funnily enough, I then asked ChatGPT to settle a more involved problem and its answer was admirably trite. This illustrates an argument that I have been making lately that placing the emphasis on logical rigour or correct arithmetic calculations is largely beside the point with large language models. What is important is not their ability to reason but to recall. Most if not all of the large language models extant today are trained on significant amounts of data from arxiv.org and so they are all to some extent able to mine it. If I had asked ChatGPT about a more obscure or technical proof, for example, then there is a good chance that it would at least be able to have a stab at it. Taking this further, it is arguable that the reasoning does not, or at least should not, get any harder as mathematical proofs get more sophisticated; only the details get more intricate. And since Occam can also cope with TeX markup, there is no reason why more sophisticated proofs cannot be copied into Occam and verified with similar ease to the trivial group theory proof that I tried earlier.
Anyone who has not had their head buried in the sand for the past few months will be aware of the furore around the veracity of what these large language models are outputting. And lately people in the tech industry have been using the term verification for attempts to, well, verify these outputs. I think they have lifted this usage straight from the formal reasoning domain and why not? Marc Andreessen recently called the verification thing a trillion dollar problem, for example, although that does seem a bit much.
Anyway, my response up until now has been basically that it is not such a problem for mathematics or similar forms of reasoning because in the future we will be able to verify these outputs. And I have been saying that before the end of the decade these large language models will be outputting mathematics at a research level that needs little or no human or indeed automated intervention before verification. So I guess the point I am making after today's experiment is that, actually, it will not take until the end of the decade. We will be there in a year or so.
I have recently added custom migrations to the following packages and therefore thought that it was about time that I mentioned them:
There is not much more to add here as the readme files say it all.
I should also mention the following two packages:
I use the Necessary package in nearly all of my projects, the only exceptions being ones where I make a point of not having any dependencies at all. The Sufficient package on the other hand is only used in a few. Still, it provides controller functionality and an overall architecture to those projects in a few lines of code, contrasting with monolithic frameworks that often struggle to provide the same with tens of thousands. I wrote a little about this on the Juxtapose site here.
Update: I have recently been working on Murmuration again, adding a way to generate statements rather than relying on static SQL files. This has worked out surprisingly well and reminds me of the time I moved over to programmatic styles from SASS, which I wrote about in an earlier update. I am also glad because Murmuration now seems to have a definite purpose. It seemed a bit rudderless before. Anyway, I could go on about it here, but it is probably best to just point you in the direction of the readme file.
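For anyone who wants the general idea without reading the readme file, here is a purely illustrative sketch of what generating a statement programmatically, rather than keeping it in a static SQL file, can look like. None of these names come from Murmuration itself:

```javascript
// Illustrative only: building an INSERT statement from a plain object
// instead of maintaining it in a static .sql file. The function and
// variable names here are made up, not Murmuration's actual API.

function insertStatement(table, row) {
  const columns = Object.keys(row),
        placeholders = columns.map(() => "?"),
        values = columns.map((column) => row[column]),
        sql = `INSERT INTO ${table} (${columns.join(", ")}) VALUES (${placeholders.join(", ")})`;

  return { sql, values };
}

const { sql, values } = insertStatement("users", { name: "Ada", age: 36 });

console.log(sql);     // INSERT INTO users (name, age) VALUES (?, ?)
console.log(values);  // [ 'Ada', 36 ]
```

The advantage over static files is the same one as with programmatic styles: the statements become ordinary values in the language, so they can be composed, parameterised and reused.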
Further update: Carrying on from the above update, I have just added a subsection to the readme file explaining how to support promises and the async/await syntax. Again this has worked out surprisingly well.
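Again, the readme file has the details, but the underlying pattern for supporting promises and the async/await syntax over a callback-style database driver is a common one and can be sketched as follows. The names here are made up for illustration and are not Murmuration's actual API:

```javascript
// Illustrative only: wrapping a callback-style execute() function in a
// promise so that statements can be awaited.

function promisify(execute) {
  return (sql, values) =>
    new Promise((resolve, reject) => {
      execute(sql, values, (error, rows) => {
        error ? reject(error) : resolve(rows);
      });
    });
}

// A stub execute() standing in for a real database driver.
const execute = (sql, values, callback) => callback(null, [{ count: values.length }]);

const query = promisify(execute);

(async () => {
  const rows = await query("SELECT 1", ["a", "b"]);
  console.log(rows);  // [ { count: 2 } ]
})();
```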
This is another of those updates that comes with a bit of a health warning. You should probably only read on if you are interested in front-end programming, React in particular. Here is a link to the package that I will be referring to:
React's mixins differ slightly from the above. They are assigned specifically to class instances rather than generally to the classes themselves. This is going to be slower, for a start; moreover, it used to be the case that mixins could not be avoided. Either you used a mixin to define a method in certain cases, or you had to make do without. So why did React implement mixins in this way? Take a look at part of Reaction's React.createElement() function here. You can see that the underlying ReactFunctionElement class is instantiated and then passed the function referenced in the JSX. The function itself is not invoked immediately, however. This happens only when the underlying element is mounted. It appears that React adopted a similar pattern in order to support either calling the React.createClass() function or extending the Component class. In both cases a corresponding underlying element class is instantiated and passed a suitable reference, see here for the equivalent in Reaction. It was this dichotomy that seems to have resulted in the need for React's mixins, which bind functions to instances of these underlying element classes as opposed to, say, instances of subclasses of the Component class.
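The distinction between the two approaches can be sketched in a few lines. This is purely illustrative, with made-up class names, and not the actual Reaction or React code:

```javascript
// Illustrative only: binding a method to each instance (mixin style)
// versus defining it once on the class (subclass style).

const clickableMixin = {
  click() { return `${this.name} clicked`; }
};

class Element {
  constructor(name) { this.name = name; }
}

// Mixin style: the method is assigned per instance, for example at
// construction or mount time. Every instance carries its own copy.
const element = new Element("button");
Object.assign(element, clickableMixin);

// Subclass style: the method is defined once, on the prototype.
class ClickableElement extends Element {
  click() { return `${this.name} clicked`; }
}

console.log(element.click());                        // button clicked
console.log(new ClickableElement("link").click());   // link clicked
```

The per-instance assignment is what makes the mixin style slower, since the work is repeated for every instance rather than done once per class.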
Now you could argue that all of the above is a bit unconvincing. After all, it has been possible to define methods in subclasses of the Component class for years. My excuse is that I could not get Reaction to work all those years ago without the aforementioned dichotomy and that React's mixins ratified it at the time.
Anyway, recently I have been working on the Open Mathematics site again, which uses Reaction, and have gotten increasingly fed up with React's mixins. So I wondered whether it would be possible to do away with them, as React had done. The obvious solution was to merge the Component class and the underlying ReactComponentElement class into one and then instantiate just the subclasses of the former, see here. And this worked! It worked because what is deferred until mounting is not the instantiation of the subclass of the Component class, which always happens immediately, but only the invocation of its render() method.
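The resulting pattern can be sketched as follows. Again this is illustrative only, with made-up names, rather than Reaction's actual code:

```javascript
// Illustrative only: the component instance is created immediately,
// but its render() method is only invoked at mount time.

class Component {
  constructor(props) { this.props = props; }
}

class Greeting extends Component {
  render() { return `Hello, ${this.props.name}!`; }
}

function mount(component) {
  return component.render();  // render() is only invoked here
}

const greeting = new Greeting({ name: "world" });  // instantiated immediately

console.log(mount(greeting));  // Hello, world!
```

Because the instance exists from the outset, methods defined on the subclass are available in the usual way and no mixin machinery is needed.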
So now you can extend the Component class and define methods in your subclass, just like React. Reaction still does not support diffing, and therefore remains quite a different experience to React, but still.
Here are links to two other packages related to Reaction:
I think that between them they now constitute a viable alternative to the combination of React, Redux and Styled components, although I remain a big fan of the latter.
I have just finished writing a couple of new Easy packages as well as rewriting some of the existing ones, so this seems like a good time to list them:
I have also just had another go at the Juxtapose website, so I am shamelessly plugging it again:
I think it conveys Juxtapose a little better although there is, of course, always room for improvement...
I have just made what I hope is a good fist of the Juxtapose website. You can find the link in the websites section below. Here it is again, I guess for emphasis:
As I mentioned in my previous update, I am a big fan of programmatic styles and JSX, which is partly why I have made this renewed effort.
Both the Occam website and the Occam IDE (such as it is) are written with Juxtapose and programmatic styles.
Update: I have also been working on a pretty printer:
Essentially it is a simpler version of the Occam IDE's pretty printer and it incorporates several of the Occam grammar packages. You can see about a dozen instances of it on the Juxtapose website.
You probably should not read this unless you think CSS is interesting.
I have been programming with CSS since the late nineties, and for most of that time I have had the distinct feeling that I did not really know what I was doing. I certainly tried to do a good job, but a glance at the CSS I produced would invariably suggest otherwise. And when I looked at other people's CSS, it appeared that they did not have much of a clue either.
Everyone will have their opinions on why CSS is so intractable. Here is my tuppence worth:
Actually the situation is much better these days. And one thing that I did manage to figure out over the years was that the above deficiencies were not actually the root cause of bad CSS. The problem was what can best be described as an organisational one. The nature of CSS selectors meant that you always ended up with a cat's cradle of interrelations between styles and therefore the way they affected any particular HTML element was often anyone's guess.
Then SASS came along and things got better. I think SASS gave us three invaluable things:
These things can lead to better organised code in a shallow, syntactic sense. Crucially, however, they also encourage and facilitate a fundamentally more modular approach. I eventually came to the following conclusion, in fact:
CSS selectors should not be indicative of and encapsulate any specific collection of properties, and then be applied generally. Instead, they should be used solely to target specific kinds of HTML elements, and should only encapsulate that kind of element's requisite properties.
It took me twenty years to figure this out.
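To make the conclusion concrete, here is a small, purely illustrative contrast, with made-up class and element names. The first rule encapsulates a general collection of properties and gets applied wherever it seems handy; the rules after it each target one kind of element and encapsulate only that element's requisite properties:

```css
/* Before: a general-purpose collection of properties, applied ad hoc */
.panel {
  padding: 8px;
  border: 1px solid #ccc;
  background: #f8f8f8;
}

/* After: one selector per kind of element, nothing shared */
nav.site-menu {
  padding: 8px;
  border-bottom: 1px solid #ccc;
}

aside.sidebar {
  padding: 8px;
  background: #f8f8f8;
}
```

The second approach looks more verbose, but the interrelations between styles disappear: the way any particular element is styled can be read off from exactly one rule.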
Together with JSX, I think programmatic styles are a game changer for front-end programming. There will no doubt always be debate about the architecture of large web applications, but in my opinion the presentation side of things is now a done deal for several years to come.
Here are some new build tools:
None of them are any great shakes, to be honest, but they have brought the build times for my projects down from the often seemingly interminable to the reasonably acceptable.
Update: I have spent the last several months improving these tools. Here is a summary:
The big one here is the support for SWC and ESBuild. I have actually migrated all of my projects over to these tools and the result is that build times have generally improved by a factor of ten or more.
I have finally gotten to grips with eliminating left recursion from BNF. It was all a bit much for a readme file, so I wrote a paper:
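For anyone who wants the gist without the paper: the textbook transformation for direct left recursion rewrites a rule of the form A ::= A α | β into a right-recursive pair. For an expression grammar, for example, the left-recursive rule

```
expression   ::=  expression "+" term
               |  term
               ;
```

becomes

```
expression   ::=  term expression~ ;

expression~  ::=  "+" term expression~
               |  ε
               ;
```

This sketch covers only the direct case; indirect left recursion, where the cycle passes through several rules, is what makes the general problem awkward enough to warrant a paper.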
The implementation is in Occam's grammar utilities package, a link to which can be found in the resources section.
Recently I learned about 996.ICU. Several months after everybody else, probably. Anyway, it gave me the excuse to run through my projects and refresh their licenses, which I have now done. The standard MIT license that I use for most of these projects is very amenable to the inclusion of the Anti-996 license. See here, for example. I also merged the standard Apache and Anti-996 licenses into one, which can be found here, for example. Feel free to lift either of these licenses for your own projects, obviously.
I decided last year that I should get anything related to Occam's verification into the public domain. With this in mind I have released some new packages concerned with Occam's grammars and its document object model. Specifically:
These links can also be found in the updated resources section.
Update: This one has also been added recently:
Earlier this year I was kindly invited out to France to meet with others interested in formal mathematics. I ended up giving an impromptu talk and have just gotten around to completing the slides that would have gone with it. You can find them in the new presentations section below.
I should take this opportunity to sincerely thank Josef Urban and Henk Barendregt for their interest in Occam.
I have been working on the Open Mathematics website again. You can now make changes to packages' readme files and publish them directly from the website. Other than that, there is now some support for GitHub. Specifically, you can commit the aforementioned changes to readme files to the underlying GitHub repository at the same time as publishing; you can edit and create issues; and you can do the same for comments on issues. There is also support for TeX, which can be added to readme files, issues and comments.
The result is that the website now functions at least partially as a kind of maths-enabled GitHub overlay. You can compare the readme file of the 'minimal-propositional-logic' package across the two sites here and here to get an idea.
There is a new website for publishing Occam packages as well as a command line package management tool to enable you to do so, called Open Mathematics. You can find a link to it in the websites section below.
Last updated 19th September 2022