I used to enjoy making categories for things. Now, I think it's a waste of time. It's hard to predict how the categories will change, and it's annoying to keep up with the changes. I believe a more effective way is to dump everything in one place and make connections through tags, implicit or explicit.
Wanting to make everything neat and orderly, especially when it comes to organizing files on a computer, probably stems from the fact that file systems are structured as trees.
I'd like to explore how file systems without trees could free us from thinking in terms of folders and subfolders.
Looks like I'm not the only one who has thought about this: https://fsgeek.ca/2019/05/09/graph-file-systems/
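The "dump everything in one place, connect through tags" idea can be sketched as a flat store plus an inverted index. This is only a toy illustration; all the names and note contents below are hypothetical, not from the linked post:

```javascript
// Toy sketch of tag-based organization: every note lives in one flat
// store, and an inverted index maps each tag to the notes carrying it.
// No folders, no hierarchy — connections come only from shared tags.

const notes = new Map(); // id -> { text, tags }
const byTag = new Map(); // tag -> Set of note ids

function addNote(id, text, tags) {
  notes.set(id, { text, tags });
  for (const tag of tags) {
    if (!byTag.has(tag)) byTag.set(tag, new Set());
    byTag.get(tag).add(id);
  }
}

function withTag(tag) {
  // Look up the inverted index instead of walking a folder tree
  return [...(byTag.get(tag) ?? [])];
}

addNote("n1", "graph file systems", ["filesystems", "graphs"]);
addNote("n2", "the DOM as a model", ["web", "graphs"]);
console.log(withTag("graphs")); // ["n1", "n2"]
```

A note can carry any number of tags, so "where does this file belong?" stops being a question with exactly one answer.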
Can machine (statistical) learning be thought of as "correlation, not causation" taken to the extreme? With better computers, it's ever easier for scientists to take data and make predictions from it.
Not that the approach is wrong, but "correlation, not causation" seems like the shortest way to describe it to a random stranger.
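As a toy illustration of prediction-from-correlation, a least-squares line fit exploits only the correlation in observed pairs and says nothing about what causes what. The data below is made up for the sketch:

```javascript
// Fit a least-squares line to observed (x, y) pairs and return a
// prediction function. The fit uses nothing but the correlation in the
// sample — it is agnostic about any causal mechanism.

function fitLine(xs, ys) {
  const n = xs.length;
  const mean = (a) => a.reduce((s, v) => s + v, 0) / n;
  const mx = mean(xs), my = mean(ys);
  let cov = 0, varX = 0;
  for (let i = 0; i < n; i++) {
    cov  += (xs[i] - mx) * (ys[i] - my);
    varX += (xs[i] - mx) ** 2;
  }
  const slope = cov / varX;
  return (x) => my + slope * (x - mx);
}

// Classic example: ice-cream sales and drowning incidents correlate
// (both driven by summer), so either predicts the other regardless of
// the absence of any causal link between them.
const predict = fitLine([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1]);
console.log(predict(5)); // extrapolates from the correlation alone
```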
GitHub Copilot solves for the easiest part of programming, which is writing the actual code. If that part is hard, it should be fundamentally solved by creating a language at the right level of abstraction. 90% of writing software is solving abstract problems of data and its manipulation. 10% is writing the code once you've figured out how to solve the problem. Copilot solves for the 10%. Not a bad thing, but its value could easily be overestimated.
Over the past 20 years, we've gone from shipping software on CDs to instant updates on the web. Though, as with most things in engineering, it's a tradeoff. Let's explore why.
When it took months to prepare a release and literally ship it, teams took their time to make sure everything worked as expected. Mileage varied, of course. You can take space shuttle avionics as an extreme example.
I feel like today's equivalent of shipping CDs is apps that require downloading and updating: iPhone apps, browser extensions, things like that. Once it's in the user's hands, you don't have to worry about keeping a server running; it runs on the user's platform. Have you ever been on call? You ship the bugs away, but when they cause problems, they are also further away from getting fixed.
If there's a problem with Airbnb's booking page, the fix can be deployed as soon as someone is done fixing it, without delays. No need to wait for an update, for a CD in the mail. Wonderful!
The tradeoff is that you carry the full burden of keeping the infrastructure running.
The more bugs you expect to have, the closer you should be when they need to be fixed.
Anyone who designs a system that escapes this tradeoff could very well have designed the next thing after the web.
“The world is its own best model”
is a catchphrase originated by Rodney Brooks to describe behavior-based robotics. The idea is that in order to be versatile and robust, robot behavior should be built out of simple, direct responses to the world, as opposed to having the robot store and act upon an accurate representation of its environment in memory. The problem with the latter approach is that it a) is computationally costly and b) ends up being brittle (the robot fails to respond dynamically when faced with something new that it can’t analyze well).
I think a similar motto can hold in web design:
“The DOM (Document Object Model) is its own best model”
- If you have a lot of elements and you need to record some piece of information about each one of them, you could build an array of items with some way to match each item to an element, but storing the information in `data-` attributes is more straightforward and less error-prone in most cases.
- If you need to make a calculation based on an element’s dimensions in order to set the value of something else, CSS’ `calc()` (usually in combination with CSS custom properties) is going to be more reliable and less costly in most cases, particularly if it’s something that would otherwise need to be updated on resize, and which CSS instead handles dynamically through relative units (%, vh, vw, etc.).
- In the same vein, CSS transitions and animations are the best way of doing simple animations 90% of the time.
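The first bullet can be sketched with the `dataset` API, which mirrors `data-` attributes on the element itself. A plain object stands in for a DOM element here so the sketch runs outside a browser; the names (`toggleOpen`, `data-open`) are hypothetical, just for illustration:

```javascript
// Sketch: keep per-element state ON the element via data- attributes,
// instead of in a parallel array/Map you have to keep in sync.
// In a browser, `el.dataset.open = "true"` writes data-open="true"
// onto the element, so CSS can react directly, e.g.:
//   [data-open="true"] { max-height: none; }

function toggleOpen(el) {
  // dataset values are strings, exactly as the DOM stores them
  el.dataset.open = el.dataset.open === "true" ? "false" : "true";
  return el;
}

// Stand-in for `document.querySelector(".accordion-item")`:
const item = { dataset: {} };
toggleOpen(item);
console.log(item.dataset.open); // "true"
```

The state lives where the element lives, so there is no bookkeeping to drift out of sync — the DOM is its own best model.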
Take software rot as an example: it is a well-known phenomenon, long documented in the software literature. Yet it has no commonly agreed-upon solution. Its management is not a topic of discussion in job interviews or in performance evaluations (for the most part). Countermeasures for software rot are not listed in general programming books nor taught in coding boot camps.
I do not believe that software rot is ignored as a topic because no one recognizes its importance. I’m pretty sure it’s ignored because no one feels comfortable giving advice about it: almost no one has successfully dealt with the long-term requirement changes and subsystem upgrades that solving software rot entails.
When teaching programming, we sometimes personify computers as pedantic rule-followers: all your instructions have to be stated explicitly, and with perfect syntax, or else the computer will get confused and fail at the task.
I think this reflects one of the greatest lessons that computing has for philosophers. It shows you the precise limits of logic, once you can’t rely on the hardware built into our brains to explain ambiguous concepts like “idea”, “intuition”, “essence”, and so on.
This is absolutely not to say that human cognitive functions and consciousness can’t be manifested in a machine. I think recent achievements in deep learning are proving the former, and the latter is perfectly conceivable to me as well. The point is that we can’t appeal to vague notions in explaining how our minds work, and these recent achievements in AI demonstrate how malleable and logically imprecise a mind has to be to cope with the contingency of a natural environment.