week #8
The assignment for this week was to revisit a previous assignment, but using data channels. Data channels are a WebRTC feature that enables arbitrary data exchange directly between peers, for things beyond just audio and video. Since the data is transferred directly between peers without going through the server (after the initial connection setup), our websites can be more efficient and scalable, reducing server load and bandwidth costs.
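To get a feel for the API, here's a minimal sketch of opening a data channel and exchanging a message. The channel label and the message shape are placeholders I made up, and the signaling step (passing offers/answers between peers through the server) is assumed to already be in place:

```ts
// One peer creates the connection and a data channel.
const peer = new RTCPeerConnection();

// "eye-state" is an illustrative label, not the actual one from my project.
const channel = peer.createDataChannel("eye-state");

channel.onopen = () => {
  // Once open, messages flow peer-to-peer, not through the server.
  channel.send(JSON.stringify({ state: "looking" }));
};

// The remote peer receives the channel via the `datachannel` event.
peer.ondatachannel = (event) => {
  event.channel.onmessage = (msg) => {
    console.log("peer state:", JSON.parse(msg.data));
  };
};
```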
I chose to revisit my week #4 assignment, where I tried to create an eye detection web program in which everyone online could see whether the other people were looking, blinking / closing their eyes, or not there at all. It sort of worked but was very finicky. So this time I started from scratch and decided to begin with something simple and progress step by step. The detection itself is more sophisticated than before, but I still need to see how it works on different people (hopefully we'll get to that in class) and adjust accordingly. The change between states is currently pretty abrupt, even though I've added a buffer to smooth it a little (see the sketch below). Blinking does happen fast, though, so I think the real solution is a visual one: instead of representing the difference between states with a rectangle that's either black or white (too high a contrast), use a symbol / illustration / image. My previous attempt had simple eye icons that worked pretty well, so maybe something of this sort?
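For reference, here's roughly the kind of smoothing buffer I mean. The window size and the majority-vote idea are my illustration; the post only says "a buffer", so treat this as a sketch rather than my actual code:

```ts
// Smooth noisy per-frame detections by majority vote over a short window,
// so a single misdetected frame doesn't flip the displayed state.
type EyeState = "looking" | "blinking" | "away";

const WINDOW_SIZE = 5; // hypothetical: number of recent frames to consider
const recent: EyeState[] = [];

function smooth(raw: EyeState): EyeState {
  recent.push(raw);
  if (recent.length > WINDOW_SIZE) recent.shift();

  // Count occurrences of each state in the window.
  const counts = new Map<EyeState, number>();
  for (const s of recent) counts.set(s, (counts.get(s) ?? 0) + 1);

  // Return the most frequent state.
  let best: EyeState = raw;
  let bestCount = 0;
  for (const [s, c] of counts) {
    if (c > bestCount) {
      best = s;
      bestCount = c;
    }
  }
  return best;
}
```

The tradeoff is latency: a bigger window means steadier output but a slower reaction to real blinks, which is part of why the abruptness probably needs a visual fix rather than just more smoothing.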
While testing, I realized I wasn't seeing the results I expected because I had just opened the URL multiple times in the same browser, and so for some reason I guess it was counted as the same session? After realizing this I always tested in something like 5 different browsers just to make sure. They were all using the same webcam, though, so I didn't get to experience it as intended. I hope that sometime in the near future I can use this as a starting point for a bigger project where the eyes' state is conveyed in a compelling way, and maybe offer some kind of interaction as well.