Linearity
Not sure why, but the question from the assignment that stuck with me the most was "can media help reinforce a thought and make it more frequent within your train of thought?". I think like a designer, so I'm very used to thinking symbolically and noticing small details, if that makes sense. I guess it's something everyone does to some extent, almost (or really) intuitively; otherwise we would not have been able to develop such a rich human culture. Choosing one word over another makes a huge difference even if they're synonyms, because some words are culturally or consciously associated with certain feelings, events, and values. So when the media, or really anything we are exposed to, either chooses to or just "happens" to repeatedly tie two things together (be it two images, two symbols (ideas or images that represent an idea), image + sound, image + word, word + feeling...), we soon start associating the two whenever one is brought up. This is powerful because we unfortunately have very little control over the neural pathways being created in our brains, and it seems to be much harder to undo an established pathway than to create a new one.
In that sense, and to answer another question from the prompt: I think yes, it could be said that the likelihood of one thought following the next is what defines our personalities. After all, it is quite like (actually is) activating a neural network where the neurons either fire or don't, and we each have our own uniquely trained network that is us, with its own weights and layers. I'm thinking about why I like asking certain people what they think about a given subject, and I guess it's because their answers usually surprise me; their brains are wired differently than mine, so they produce different results with the same input. Being able to say "Oh! I didn't think about it that way!" is why we enjoy talking to other people. So in a way, connecting different models (be it AI models, or models of thinking, i.e. different human beings) is where new ideas lie. Otherwise it's just echo chambers of whatever the model (or brain) was trained on (or exposed to). But is this really how our brains are? Is every individual mind only capable of thinking through things it was previously exposed to? It feels so convoluted that it's a hard question to answer.
There's this famous quote that's (probably falsely) attributed to Einstein that goes: "The definition of insanity is doing the same thing over and over again and expecting different results". But I don't think that's ever true? Time still goes forward, so nothing is ever the same; everything is unique to the moment it exists in. And it's almost the contrary when thinking about neural networks (both digital and biological), because as it turns out, the more you repeat the "thing", the more persistent it becomes.
So repetition over time is not a redundant behavior: it's used in poetry to emphasize an idea; it's how we learn pretty much anything (that's called practicing!); it's how scientific experiments are conducted; it's how the world works. Our planet and solar system keep spinning again and again; our hearts keep pumping and pumping so that we may live. I might be getting too poetic here, but I guess I'm saying that almost any act we consider crucial for "living" involves some kind of repetition over time. Doing the same thing over and over again seems to be essential to life as we know it.
Quick note — I'm writing this as I work because it makes more sense to me. Please kindly take this into account while reading :-)
GitHub link (a live version won't work without the credentials configuration file, sorry).
I realized I'm going to want to use Firebase along with Replicate for my Project Development Studio project. I'm not sure exactly how it should work yet, but I think I'll have a database that gets updated in real time with new data coming from an API. Each record will eventually have five fields: a string of text, a label and a score given by a sentiment analysis model, a timestamp, and a source (just for my records).
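For reference, here's roughly what one of those records might look like (a sketch; the field names are mine and may change):

```js
// A dummy record of the kind I'm pushing to Firebase while testing.
// The field names are placeholders I chose, not final.
const entry = {
  text: "Markets rallied after the announcement.", // the raw string
  label: "positive", // filled in by the sentiment model
  score: 0.82,       // filled in by the sentiment model
  timestamp: Date.now(),
  source: "nyt",     // just for my records
};
```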
I took the last example from class and stripped it down to the functionality I needed. For some reason I can't get through to the sentiment analysis model I'm trying to send requests to (I get a 404 "The requested resource could not be found." error). If I replace the model with any LLM, it does work. I thought this was because the model I was trying to use is cold, so I went as far as paying for a few runs on the Replicate website to warm it up, but the fetch calls from my script still didn't go through. So I guess it might have something to do with how I'm calling the model itself, but I've tried every combination of version / modelURL etc. and still nothing, and I did get other models to work just fine. I'm kind of lost at the time of writing this, so I've set up office hours to see if we can get to the bottom of it.
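For reference, a direct call to Replicate's predictions endpoint looks roughly like this (a minimal sketch, not the class example I'm actually running; REPLICATE_API_TOKEN, MODEL_VERSION, and the input field name are assumptions on my part). My current suspicion is that something in the version identifier is what's producing the 404:

```js
// Hypothetical sketch of a direct Replicate API call.
// MODEL_VERSION must be the model's version hash; a wrong or missing
// identifier is one way to end up with a 404 like the one I'm seeing.
const response = await fetch("https://api.replicate.com/v1/predictions", {
  method: "POST",
  headers: {
    Authorization: `Token ${REPLICATE_API_TOKEN}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    version: MODEL_VERSION,
    input: { text: "I love this!" }, // input schema varies per model
  }),
});
const prediction = await response.json();
console.log(prediction.status, prediction.error);
```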
In the meantime, I'm using the Meta Llama model and asking it nicely to perform sentiment analysis, though I would rather use a model that was trained for the job(?). For now it performs pretty well: I upload dummy data as JSON to Firebase, and almost immediately the score field gets populated with the result coming in through Replicate. Eventually the data will come from another API once I sort that part out as well, and the whole database will be used to calculate one number that will be sent to an Arduino. Long way to go!
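The "asking nicely" part is just a prompt. Something along these lines (a hypothetical version; the exact wording is still evolving, and asking for strict JSON makes the reply easier to parse):

```js
// A hypothetical prompt for coaxing sentiment analysis out of an LLM.
const prompt = `Classify the sentiment of the following text.
Respond with JSON only, exactly in the form:
{"label": "positive" | "negative" | "neutral", "score": <number between 0 and 1>}

Text: "${text}"`;
```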
Edited to add: I changed a couple of things; for example, now only a newly added entry gets sent to the model and then updated in the database. Right now the entry comes from an input field so I can test easily. I'm slowly getting to know the different Firebase functionalities, which is important for my bigger-scale goals for this project.
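Under my current assumptions, the listener looks something like this, using the Realtime Database's onChildAdded (the "entries" path and the getSentimentScore helper are placeholders of mine):

```js
import { getDatabase, ref, onChildAdded, update } from "firebase/database";

// Sketch: react only to newly added entries and write the score back.
// onChildAdded also fires once per existing child, so entries that
// already have a score are skipped.
const db = getDatabase();
onChildAdded(ref(db, "entries"), async (snapshot) => {
  const entry = snapshot.val();
  if (entry.score !== undefined) return; // already analyzed
  const { label, score } = await getSentimentScore(entry.text); // my Replicate call
  update(snapshot.ref, { label, score });
});
```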
The thing that bugs me is that I don't feel like I'm getting a reliable score from the model. I ran some tests with both the sentiment analysis model (on Replicate itself, since I still can't get it to work in code) and the Meta Llama 7B model. I actually think the LLM might do better because it has more context and doesn't seem to score based only on negative/positive words (it's also way faster). The downsides are that I need to carefully craft the right prompt, and that I can't trust it to give me what I want 100% of the time. But this might be a price worth paying, as long as I implement enough safeguards to make sure a bad response doesn't break anything.
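One possible safeguard (a sketch, not what I've committed to): parse the reply defensively and fall back to a neutral score instead of writing garbage to the database.

```js
// Defensive parsing of the LLM's reply. If it isn't valid JSON, or the
// score is out of range, return a neutral default rather than breaking.
function parseSentiment(reply) {
  try {
    const parsed = JSON.parse(reply);
    const score = Number(parsed.score);
    if (Number.isFinite(score) && score >= 0 && score <= 1) {
      return { label: parsed.label, score };
    }
  } catch (err) {
    // malformed JSON: fall through to the default below
  }
  return { label: "neutral", score: 0.5 };
}
```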
Another issue is that, while staying in the same range, the scores can change by up to 15% when rerun on the same text. I will eventually implement a number of ways to balance and de-bias the score, but I need the initial data to be better and fairer first.
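One cheap way to damp that variance, assuming I stick with the LLM, would be scoring the same text a few times and averaging (again just a sketch, reusing the hypothetical getSentimentScore helper from above):

```js
// Run the model a few times on the same text and average the scores
// to smooth out run-to-run variance. Costs extra calls per entry.
async function stableScore(text, runs = 3) {
  const scores = [];
  for (let i = 0; i < runs; i++) {
    scores.push((await getSentimentScore(text)).score);
  }
  return scores.reduce((sum, s) => sum + s, 0) / scores.length;
}
```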
Edit #2: I decided to run some tests with the NYTimes real-time news API. I will eventually be using more (or other) sources, so I didn't go deep into making the NYT one work exactly like I need it to (there are duplicate entries, but that's fine for my tests for now).
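When I do get around to the duplicates, the fix could be as simple as filtering on a field that's unique per article (I'm assuming the article URL works for this; the response field name is my guess):

```js
// De-duplicate incoming articles by URL before writing to Firebase.
// Assumes each article object has a stable `url` field.
const seen = new Set();
function dedupe(articles) {
  return articles.filter((article) => {
    if (seen.has(article.url)) return false;
    seen.add(article.url);
    return true;
  });
}
```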
What's important is that every once in a while (I used setInterval() and call for an update every 5 minutes, but that's adjustable) I get data from an API; it gets sent to the database, then to an ML model (through the ITP proxy), and back to the database, where a value is updated. The script then looks at the values of all the items, calculates a score (it's just averaging for now, but this will change), and displays it (eventually it will send it to an Arduino).
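Put together, the loop looks something like this (a sketch under my current assumptions; fetchLatestNews, pushEntry, getAllScores, and display are placeholders for the pieces described above):

```js
// The whole pipeline, every five minutes: fetch, store, score, average.
const FIVE_MINUTES = 5 * 60 * 1000;

setInterval(async () => {
  const articles = dedupe(await fetchLatestNews()); // NYT for now, more later
  articles.forEach(pushEntry); // write to Firebase; the listener scores each one
  const scores = await getAllScores(); // read every score field back out
  const average = scores.reduce((sum, s) => sum + s, 0) / scores.length;
  display(average); // eventually: send this number to the Arduino
}, FIVE_MINUTES);
```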