Week #05
In class I started by quickly prototyping the form and how I thought the red line was going to be attached to the device.
This is the most basic form I could think of, and the same one I had sketched in previous weeks. After seeing a low-fidelity physical version, it became very apparent that it wasn't working. The primitive boxiness felt right on paper and works conceptually, but in real life the angle is off: it doesn't feel like a display, and moving the red line required hand positions that felt too cumbersome.
I went on to sketch a few other options:
I then quickly prototyped two more forms using cardboard and tape, using the laser cutter for some of it because it was even faster. I scaled everything to fit the cardboard I had; I think the final piece will be somewhat bigger.
I made a first draft of some parts of the code. Unfortunately I can't share a live link at the moment: there are API keys that I will need to handle either through a proxy or on the server side. Either way, this is merely a functional website, and it isn't intended to be viewed by anyone yet.
What I have so far is the basic structure:
I will continue to improve on this, but having the general structure helps. I can swap between calculation functions and send data from multiple sources because the code is mostly flexible and easily scalable. Building it also surfaced some problematic areas and changed how I thought it would work.
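The "swappable calculation functions, multiple sources" idea can be sketched like this. Everything here is hypothetical (the keyword scorer is a stand-in for the real model call), but it shows the structure: every source produces the same entry shape, and the scorer is looked up by name so it can be switched out.

```javascript
// Registry of scoring functions; the active one can be swapped at any time.
const scorers = {
  // Naive keyword counter as a placeholder; the real scorer calls an LLM.
  keyword: (text) =>
    (text.match(/good|hope|peace/gi) || []).length -
    (text.match(/bad|war|crisis/gi) || []).length,
};

let activeScorer = 'keyword';

// Attach a score to an entry using whichever scorer is active.
function score(entry) {
  return { ...entry, score: scorers[activeScorer](entry.text) };
}

// Different sources feed the same pipeline because they share an entry shape.
const entries = [
  { source: 'rss', text: 'Peace talks bring hope' },
  { source: 'api', text: 'Crisis deepens amid war' },
].map(score);
```

Swapping in an LLM-based scorer would then just mean adding another entry to `scorers` and changing `activeScorer`, without touching the pipeline.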
I initially planned on using a sentiment analysis model, but upon testing it with real headlines and articles I saw that its results weren't good enough; it was generally too literal. I tested some LLMs and got better results (currently using Meta's 7B Llama model). An LLM brings some context, which helps with calculating the score, but the downside is that the output isn't always predictable in terms of structure (although with this particular model I've found it to work pretty well).

I do have to send an "engineered" prompt along with the data, though this might be a good thing: I could ask for specific context, or give more "fair" instructions. I would also need to implement ways of managing "wrong" answers (i.e. getting a string of words back from the model instead of a float).

I'm also not sure if and when I might need some way of managing the load. I'm assuming at some point (thousands of entries?) it might become slow and could potentially crash or freeze.
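For the "wrong answers" problem, one approach I'm considering is to treat the model's reply as untrusted text: try to pull a number out of it, and give up (return `null`, skip the entry) if there isn't one. A rough sketch, assuming scores are supposed to land in [-1, 1]:

```javascript
// Extract a usable float from a possibly chatty or malformed model answer.
function parseScore(raw) {
  if (typeof raw !== 'string') return null;
  const match = raw.match(/-?\d+(\.\d+)?/); // first number-looking token
  if (!match) return null;                  // model returned prose only
  const value = parseFloat(match[0]);
  // Clamp to the expected range, assuming scores live in [-1, 1].
  return Math.max(-1, Math.min(1, value));
}

parseScore('0.72');                       // clean answer → 0.72
parseScore('The score is -0.4, because'); // chatty answer → -0.4
parseScore('I cannot rate this.');        // no number → null
```

A retry (re-sending the prompt once on `null`) could sit on top of this, and the same boundary would be a natural place to add queueing or batching later if the load becomes a problem.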