
Jasmine Nackash is a multidisciplinary designer and developer interested in creating unique and innovative experiences.

Dimensions

Inner Dimensions

I very vividly remember a specific moment — I remember it because I explicitly told myself to remember it at the time (what some would call making a mental note).

I was 7 years old, it was 11pm and I couldn't fall asleep. I got out of bed and eavesdropped on my parents talking on the balcony. I cannot recall what they were talking about; I'm not even sure I was actually listening. I just remember thinking, or rather realizing, that life as I knew it was the very tiny tip of an unfathomably large iceberg. I remember thinking that as I grew older this iceberg would become more exposed, but that night I realized that even for my "old" parents this was not the case. I thought about how even they were "playing this game" and not seeing the bigger picture. I'm not sure how to explain this feeling, honestly. But I remember telling myself something along the lines of "I'm 7 years old, it is 11pm. The world outside is small and a lie, and the world inside is endless and is the only truth. For now, keep playing by the rules. Never tell this to mom and dad."

I know this sounds kind of bonkers, but I think that was the moment I became aware of my own consciousness, and while I don't view it with such dichotomy anymore, I do still carry that feeling with me.

For me, my thoughts are not really tied spatially. It's more associative, in a way I can't really explain; I guess it's contextual, but sometimes it's visual or emotional. Sometimes, when I'm just waking up or falling asleep, I try to sustain this half-asleep state for a bit to think of something specific, and then I get to kind of wander around in this "latent" space. It's usually visual: I see something that is abstract but also very distinct, and I start focusing on different aspects of it and kind of branch out into variations on each part. Sometimes it's like designing a poster and trying to tweak different color combinations, different compositions, angles, different textures, different moods. The hard part is trying to memorize what I got when I feel like I have something worth bringing to life. It really doesn't always translate that nicely to the physical realm, but it sometimes helps me get interesting enough ideas and directions worth exploring. In that sense, now I'm realizing, one could say I sometimes use my brain as an ML model? That's weird. Anyway...

I'm honestly not a big fan of VR. I see the potential, but it feels like we're not quite there yet. Also, it makes me dizzy, and the headset is too heavy and cumbersome for me. I do, however, think it could be an interesting medium to explore in terms of interactivity. I don't think it has to mimic "reality" at all, but sure, hyperrealism could be a tool for conveying certain experiences. I think even a book can be immersive if the story is well written, so it shouldn't be about the medium, but about what you're doing with it. If anything, I think the more interesting VR experiences would be those that use the medium's "limitations" to leverage more unique interactions and storytelling techniques. Like, don't get around it, use it!

Making

Link to github repo

Link to live site

So I started by setting up one of the examples from class. It didn't work initially, and I'm not sure why. I guess sometimes the request doesn't go through, or something gets lost, because it kept working and then not working over and over. Anyway, if it doesn't seem to work, try refreshing and giving it a minute.
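One way to soften that kind of transient flakiness is a small retry wrapper around the request. This is just a sketch of the idea, not code from the actual repo; the function names are mine:

```javascript
// Hypothetical retry helper -- not from the actual project code.
// Wraps any async request function and retries it a few times with a
// short delay, since requests sometimes fail for no apparent reason.
async function withRetry(requestFn, attempts = 3, delayMs = 1000) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await requestFn();
    } catch (err) {
      lastError = err;
      // Wait a bit before trying again.
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  // All attempts failed: surface the last error.
  throw lastError;
}
```

Usage would look something like `withRetry(() => fetch(url).then((r) => r.json()))`, so a single dropped request doesn't break the whole page.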

I played around and tried different models, even a niche one that generates X-ray images, but that didn't really lead me anywhere interesting...

I've been watching this show on Netflix called Kaos — it's a modern twist on Greek mythology. The story revolves around the characters' "prophecies": each person is supposed to have a unique one, but it just so happens that some of the main characters got the same one. They each have different interpretations of it, and Zeus, the ruler of the gods, is determined to defy his. It's always the case with these stories: by trying to defy your fate, you end up materializing it. The prophecies are always kind of cryptic and can have multiple interpretations. It made me think of the stories our minds make up to make sense of the world, how we kind of shove everything into what we already know and think of the world. I wanted to make an interface that would allow me to explore the difference a word (a token?) makes, to sort of try and visualize the latent space. Of course I have no way of knowing what the actual parameters are, but by changing a word I get slightly different results, and in a very small way that's indicative of the way the model works.

So I set out to create a Prophecy Generator! I made up a bunch of parameters that would be joined and sent as a prompt depending on what the user selects and started testing the results:

Four different outputs with only one parameter changed in between

There's so much we can learn from that! For example, "A storm brews" appears in 3 out of 4 outputs, and there's nothing in the parameters that's close to it in meaning. So this token (is this the right term for this?) might be closely related to "Greek-style prophecy"?
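For reference, the parameter-joining step described above was conceptually something like the sketch below. The parameter names and the prompt template are invented for illustration; the real ones live in the repo:

```javascript
// Sketch of joining user-selected parameters into a single prompt.
// Keys and template text are illustrative, not the project's actual ones.
function buildPrompt(selections) {
  const parts = Object.entries(selections)
    .filter(([, value]) => value) // skip anything left unselected
    .map(([key, value]) => `${key}: ${value}`);
  return `Write a short, cryptic Greek-style prophecy. ${parts.join(". ")}.`;
}

// Example: three selections joined into one prompt string.
const prompt = buildPrompt({
  tone: "ominous",
  temporality: "near future",
  subject: "a ruler",
});
```

Changing just one selection (say, `temporality`) then produces a prompt that differs by only a few words, which is what makes the side-by-side comparison of outputs meaningful.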

I decided to also add a second model that generates an image based on the prophecy. This took longer than expected... The issue was with the async functions: the code had to wait for the output of the first request before making the second one. When it finally worked, it looked like this:
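The chaining itself boils down to two awaited calls in sequence. Here's a rough sketch of the shape of it; `callModel` stands in for whatever API call the project actually makes, and the model names are placeholders:

```javascript
// Sketch of the two-step chain: text model first, then image model.
// `callModel(modelName, input)` is a stand-in for the real API request.
async function generateProphecyAndImage(callModel, prompt) {
  // First request: generate the prophecy text.
  const prophecy = await callModel("text-model", { prompt });
  // Second request must wait for the first, since the prophecy
  // becomes the image model's prompt -- this was the tricky part.
  const image = await callModel("image-model", { prompt: prophecy });
  return { prophecy, image };
}
```

Injecting the request function like this also makes the chaining easy to test with a stub, without hitting the real API.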

I blurred the image intentionally — I thought it worked better with the concept — it creates an atmosphere but leaves room for interpretation (but if you really want — hovering over the image will sharpen it).
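The blur-and-sharpen-on-hover effect can be done with plain CSS filters and a transition. The class name here is my guess, not the project's actual stylesheet:

```css
/* Hypothetical class name -- the effect, not the project's actual CSS. */
.prophecy-image {
  filter: blur(8px);
  transition: filter 0.4s ease;
}

/* Hovering gradually removes the blur, revealing the full image. */
.prophecy-image:hover {
  filter: blur(0);
}
```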

There were a lot of little technical issues to solve, and I must say that as much as Copilot was helpful in setting up some basic structures, it wasn't as helpful in connecting everything and solving issues. Maybe that's for the best, though... I still like doing some stuff on my own.

I then moved onto designing and refining and ended up with something like this:

Another example of what we get by changing only one parameter (Temporality, in this case). It's interesting to see how close all the outputs are. I mean, there should be some level of randomness or chance, but I tested this many times, and more often than not I got very similar results, whether using the exact same prompt or subtle variations of it.

I guess I kind of just made a tool for myself to explore this space, more than an experience anyone else would be interested in. I was hoping to go further with this, maybe generating many images per prophecy to kind of visualize different interpretations, but I didn't get to do that yet. Perhaps my future holds more opportunities to explore ML models...