Colab notebooks
Bret Victor's ideas in Inventing on Principle really resonated with me. As I grow older and develop as a creator, I'm realizing that most of my ideas or interests really stem from one core "thing". They may end up going in very different directions, but they share the same fundamental root. So really, I guess I only ever had one good idea and everything else is just a take on that.
It doesn't happen with every project, but it has happened a number of times where I would just get an idea and feel like I have no choice but to do it, like it's not up to me at this point — the idea needs to be brought to life and I must make it so. I think this feeling is quite similar to what Victor was talking about. When this thing happens, it's not like it's very clear to me what I should do, but to put it in Victor's words — I definitely feel strongly about what's right and what's wrong for that concept and it doesn't feel like a matter of opinion, or personal style, but rather it's a clear "yes or no" thing.
That said, I don't know if I know my principle yet. I tried putting it into words, but no one definition felt sufficient. I'm honestly relieved, because I think I have a lot more to explore and learn before I have "my thing". In a way I almost hope I never get it exactly right, because I feel like it would be too constraining. I'd rather work intuitively and listen to what my concepts want, and not be biased by how I perceive myself.
Growing up I always kind of assumed that anyone older than me, even by just a year, knows more, is probably smarter, and that I should listen to them. So with any idea I had about the world, the immediate thought was "well, others probably already thought about this and came to the conclusion it's wrong", or "if this was feasible it would've already happened", or "I'm probably not smart enough to fully understand this and shouldn't even try because I have no chance". It was like this for a very long time, and in many ways it still is. But ever so slowly, as I gain more experience and knowledge myself, I've come to realize that most people don't know shit (sorry), or at least that they're not as sure about it as I thought. I'm realizing that there's room for silly questions, that others have not necessarily already thought about what I'm thinking, and that it's worth sharing, because it might allow something new to emerge. And if I feel it, I must assume many others feel it too. I guess my point here is that both self-confidence and actual knowledge play a major role in "Inventing on Principle", and their absence might be one of the biggest obstacles to getting there (at least for me).
I really liked Wolfram's idea of the Ruliad — something akin to a latent space of all possible combinations of computational rules. This clearly stems from his early experiments in cellular automata (I was only familiar with Wolfram because of his Sierpinski-triangle-like cellular automata). With CA you always define a set of rules that can only ever generate a finite set of combinations, and then you have to sort of wander through this space of possibilities and basically hope you stumble across something interesting. There's really no guiding principle on which rules will yield a more interesting result — it's a pretty chaotic system in that sense — small changes in the initial rules bring about very different behaviors. But back to the subject of the Ruliad — I was looking for more information on this and found Wolfram's own text about it (this feels related to Gödel's incompleteness theorem, but in ways I can't fully understand).
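To make "a set of rules" concrete: Wolfram's elementary cellular automata pack the whole rule into one number from 0 to 255, where each bit says what a cell becomes given its three-cell neighborhood. A minimal sketch in Python (rule 90 is the one that draws the Sierpinski triangle):

```python
import numpy as np

def step(cells, rule):
    """One step of an elementary cellular automaton.

    `rule` is a Wolfram rule number (0-255). Each cell's next state is
    the bit of `rule` selected by its three-cell neighborhood, read as
    a 3-bit number: left*4 + center*2 + right.
    """
    left = np.roll(cells, 1)    # wrap around at the edges
    right = np.roll(cells, -1)
    idx = left * 4 + cells * 2 + right
    return (rule >> idx) & 1

# A single live cell under rule 90 grows into a Sierpinski triangle
cells = np.zeros(31, dtype=int)
cells[15] = 1
for _ in range(8):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, 90)
```

Changing 90 to, say, 30 or 110 gives completely different behavior, which is exactly the wandering-and-hoping part: nothing about the rule number tells you in advance whether you'll get noise, stripes, or a fractal.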
It was interesting to see how the concept of the Ruliad also includes within itself a sort of admission of our own limitedness. Wolfram calls us bounded observers — "We never get to see the full ruliad; we just sample tiny parts of it, parsing them according to our particular methods of perception and analysis". All the rules and languages we came up with to describe the world around us are, for the most part, derived from what we can perceive. Some explanations we come up with seem to be more inherent to the universe in many of its possible configurations, but none are complete. And so while they make sense to our specific configuration of senses and comprehension, they are by no means exhaustive. This always makes me think of stuff like quantum physics and entanglement and grey matter — because, not that I have any actual knowledge of this beyond a YouTube video or two, I feel like the currently accepted theories are not entirely convincing, and it may just be that we don't have the right tools or senses to understand this behavior that we can't explain...
Sometimes it's like trying to imagine a color that doesn't exist. But in the same way we learned to measure and harness electromagnetic waves, imagine what else is out there. Some of it, if not most, is probably forever beyond our ability to even detect. But the idea of the Ruliad suggests its existence nonetheless.
The topic of how much people should know about computational systems has been discussed a lot in recent years, increasingly so with the advancement of LLMs like ChatGPT. I don't think there's one right answer to this. I personally like knowing how things work, and probably half the time, if not more, my ideas or interests stem from knowing, or wanting to know, more about how things work. They stem from asking "well, but what if we do it like this", or "let's try combining this and that" — it's usually about exploring what's possible and then trying to stretch that a little further, or break it, or use it in an unpredictable way. I'm the type of person that, when downloading a new app, for example, has to go through all the menus and buttons and see what's possible. I get that not everyone is like this, and it's valid to just know what you want to do and use whatever tool you have at your disposal to achieve it without needing to know exactly how it works. Sometimes that's not the point. I don't need to know how a transistor works in order to use my computer, but I want to, because I find it interesting.
In terms of education I feel like there should always be an abundance of options — people should be able to learn the basics and figure out where their interest lies. If they don't find it interesting, they don't have to go any further; and if they do, then by all means — there should be options to dive deeper. However, I do recognize that in the outside world there aren't many options somewhere in between. Everyone kind of has to carve out their own path through practice. But does it have to be like this? Take coding, for example — there is an abundance of courses and resources at the intro level. There are also many options, like pursuing advanced degrees in computer science, where one would learn about the inner workings of computational systems in depth. But it's so much harder to find your way when you're in between those two things. I guess a lot of people at ITP would put themselves somewhere in that intermediate zone — slightly above beginner, but not advanced yet. I think one would need a lot of real-world experience to be at an advanced level, and so one could argue that by skipping the intermediate stage (learning how things work and getting your hands dirty) you cannot get there. So you could say it comes down to what you want to achieve. Personally, I feel like I'm motivated more by curiosity, even if it's not always the best thing for me in terms of how I spend my time or effort. I've learned to accept that, and I try to trust that it'll lead somewhere. I learn whatever I need to learn in order to make an idea happen, and many times the idea changes as I understand how things work (and many times the idea only comes from understanding how things work). I guess my own tendency is that of a generalist — knowing a little about a lot (the other side of this is the expert, who knows a lot about a little). I find almost everything interesting, but I don't go in depth into any one particular subject.
It's more about making connections for me. But we need, and there's room for, all kinds of people with different interests and different abilities in the world!
This reminds me of Richard Bartle's Taxonomy of Player types:
I won't go into this here as I've written too much already, but I think that, as with learning and practicing (an art, or a profession), our tendencies there resemble the ones we have in games (I also think games can and do reveal our tendencies in a special way). From that angle, I'm easily a hard Explorer with a slight lean towards Achiever (I like strategy and puzzles too...). But I don't like skill-based games, for the same reason I'm not an expert in anything: I want to see what's possible and to have the time and space to wander around.
Not sure why, but I've noticed I've started spending much more time on reading, writing and researching than on making. I guess one reason is that the technical aspect is getting progressively more complex and I want to understand what I'm doing. But last week and this week it looks like I didn't get to make much. I'm struggling a little bit with understanding how these things work.
Starting from playing around with the first examples, I wasn't sure what to aim for because I didn't know what it was capable of yet. It took a short while to get the first couple of examples to work, but they finally did. I tried to make the third example work, but it seemed to require some Drive file and I wasn't sure what to get from where. Instead I tried to make the notebook it's based on work (inference playground), which also required some installing and Google Drive access (which is slightly worrying, but ok).
I'd like to understand how age, or other parameters, are defined in that space. It seems like something predefined — cars and horses also seem to have predefined options for a space one could wander around in. Are these spaces something the model was specifically trained on? Like trained to recognize what age people are, and then perform a style transfer onto the image we generated? Is that how this works? And what defines the edges of the ranges? Is this based on pixels or on semantics? (Am I mixing everything up?)
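For what it's worth, here is my current guess at how those sliders work (the names and numbers below are made up for illustration, not taken from the notebook): the model comes paired with precomputed direction vectors in its latent space, one per attribute, found by fitting a separator between latents labeled e.g. "young" vs "old", and the slider just moves your image's latent code along that vector. A minimal sketch:

```python
import numpy as np

# Hypothetical setup: a 512-dim latent code and an "age" direction.
# In practice such directions are learned from labeled samples;
# here they are random stand-ins just to show the mechanics.
rng = np.random.default_rng(0)
latent = rng.normal(size=512)            # one generated image's latent code
age_direction = rng.normal(size=512)
age_direction /= np.linalg.norm(age_direction)  # normalize to unit length

def edit(latent, direction, strength):
    """Move a latent code along a semantic direction.

    `strength` plays the role of the notebook's slider. Very large
    values push the code far outside the region the model saw during
    training, which may be why absurd numbers produce garbage or errors.
    """
    return latent + strength * direction

older = edit(latent, age_direction, 3.0)
younger = edit(latent, age_direction, -3.0)
```

If that guess is right, the edges of the ranges would be semantic rather than pixel-based: they mark roughly how far you can push the latent code before it leaves the region the model knows how to decode.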
I kept trying to run absurd numbers on other commands just to gauge the ranges, and then encountered the following error, which was actually helpful because it revealed the possible range for that specific parameter:
But I changed the numbers and still got this error, with a different number in it, so now I'm not so sure what it actually means.
I also tried sort of mixing two of the source images, but that didn't really go as expected:
I started having disconnection issues, and then the following message popped up:
I guess Google Colab is telling me to go home and get some sleep.