Hack Jam Log Book is a log of progress made in and around a weekly hack session. Topics include natural language processing, high energy electronics, linguistics, interface design, &c. Enjoy.


27.12.08

 

Merry Holidays, and Screen Capture Update

The Screen Capture dealio described in my last post is coming along. It presently displays two adjacent colored boxes: one solidly foreground-colored, and one mostly background-colored but flickering to the foreground color at a customizable rate of once every n frames; the foreground and background colors are also customizable.

The least n for which the results are meaningful is, of course, 2, meaning the box is painted the foreground color on every second frame and the background color on all others. At this rate my naked, unaided eye can infer the foreground color with ease, though it's hard to say whether that is affected by the adjacent solid box -- this will have to be eliminated in future double-blind tests. Unfortunately the rapid cycling introduced some horizontal lines which were very unpleasant to look at and which will have to be dealt with before useful conclusions can be drawn.

Adam and Sam gave helpful insights on what was causing the unpleasant horizontal lines and how to cope with them. Presently the main drawing loop merely increments the frame index counter, wraps it to zero if necessary, and throws up the appropriate frame for the flickering side, as quickly as possible. The horizontal lines arise when the program writes to the colored box's memory while that memory is being read out by the display hardware. To get around this it is necessary to interrupt the flow of things, patiently wait for the vertical refresh to complete, and then resume.
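For concreteness, here is a minimal sketch of the kind of loop described above, assuming SDL 1.2; the window size, colors, box positions, and value of n are placeholders, not the project's actual values:

    /* Sketch of the flicker loop: one solid box, one box that flashes to the
     * foreground color once every n frames. Assumes SDL 1.2. */
    #include <SDL/SDL.h>

    int main(void)
    {
        const int n = 2;                  /* flicker period */
        int frame = 0, running = 1;
        SDL_Event ev;

        SDL_Init(SDL_INIT_VIDEO);
        /* Double-buffered hardware surface; SDL_Flip() should then wait for
         * the vertical retrace where the driver supports it. */
        SDL_Surface *screen = SDL_SetVideoMode(640, 480, 32,
                                               SDL_HWSURFACE | SDL_DOUBLEBUF);

        Uint32 fg = SDL_MapRGB(screen->format, 255, 0, 0);   /* foreground */
        Uint32 bg = SDL_MapRGB(screen->format,   0, 0, 0);   /* background */
        SDL_Rect solid   = {  40, 40, 200, 200 };            /* always foreground */
        SDL_Rect flicker = { 280, 40, 200, 200 };            /* foreground 1-in-n */

        while (running) {
            while (SDL_PollEvent(&ev))
                if (ev.type == SDL_QUIT) running = 0;

            SDL_FillRect(screen, NULL, bg);
            SDL_FillRect(screen, &solid, fg);
            SDL_FillRect(screen, &flicker, (frame == 0) ? fg : bg);

            SDL_Flip(screen);             /* swap buffers, draw as fast as possible */
            frame = (frame + 1) % n;      /* increment and wrap the frame counter */
        }

        SDL_Quit();
        return 0;
    }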

A vertical refresh of 60Hz means the screen is redrawn 60 times per second, and drawing at a higher FPS than that should keep the flicker invisible. Since there is no computation, rendering, or complicated graphical processing involved -- only blitting a small square that's already in memory -- it should not be hard to get better than 60 FPS, thereby making the refresh rate, and the horizontal lines, invisible.

Is this possible? The SDL trick of creating a double-buffered hardware surface and calling SDL_Flip() in the draw loop does not seem to have helped the horizontal line problem, so I'm looking into OpenGL to see whether it will afford the degree of control necessary to eliminate it. Sam suggested that, because of various factors I don't recall in their entirety -- including but not limited to what the eye can hold together and how quickly the monitor updates -- I would only be able to split the image into about four frames before it no longer appeared whole. Makes sense: if the frames are too far apart in time, the eye may not unify them. This will of course be tested.

Next up will be separating the target display image into nonoverlapping (and not necessarily contiguous) chunks, one per frame. There are combinatorially many ways of splitting a fixed number of pixels among a fixed number of frames. A few of the ways I'll be looking at are:
  • random pixels: approximately (width*height)/n pixels will be pseudorandomly selected from the image and recorded to each frame, with no repetition. Or some repetition, but not in every frame? These are variables that will have to be examined for the best output.
  • "checkerboard": the image will be divided checkerboard-style into alternating squares, so that individual frames will appear as checkerboards, alternating background with foreground.
  • "radial chopping", or "pi cutting": radii drawn outward from the center of the image at regular (or irregular) angular intervals will determine where one region ends and the next begins. Again, one or more nonadjacent regions per frame. No/some repetition?
The more I think about this, the more I think that the clearest aggregate image, with the least information about the whole picture contained in any single frame, will come from random pixels with some repetition, spread evenly enough that most repeating pixels do not appear next to other repeating pixels very often. For suitable definitions of "most", "very often", "next to", "evenly enough", and so on.

Jumping right ahead to implementation details: assume there is an arbitrary upper limit on the number of frames an image can be split into, and that this limit just happens to equal the size in bits of a single data register on the machine I've been doing all the development on. Assume also that I want to implement several different ways of splitting images into frames, and that I certainly don't want to duplicate a whole lot of code doing so -- I'd much rather pass an image to any one of several image-to-frames splitting routines and have each return an efficient description of how that image should be split among any desired number of frames, up to the arbitrary limit. Given all that, I think I can do this: create a "frame mask" of dimensions equal to that of the image, whose elements are 32-bit integers, with bit 0 indicating whether the pixel should be present in frame 0 and bit n indicating whether it should be present in frame n. The splitting routines can all write this format quite easily, and the post-splitting frame-generating routine has no trouble reading it at all. This also makes it easy to save particular random splittings that turn out to be favorable, for later analysis or just to show your friends.
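A rough sketch of what that might look like in C -- the function names and the one-pixel-per-frame policy here are just for illustration; a splitter that allows repetition would simply set more than one bit per pixel:

    /* "Frame mask" sketch: one 32-bit word per pixel, bit k set iff that pixel
     * should be painted in frame k. Splitters write it; the frame generator reads it. */
    #include <stdint.h>
    #include <stdlib.h>

    /* Random splitter for a w*h image across n <= 32 frames,
     * assigning each pixel to exactly one frame (no repetition). */
    uint32_t *random_split(int w, int h, int n)
    {
        uint32_t *mask = malloc((size_t)w * h * sizeof *mask);
        for (int i = 0; i < w * h; i++)
            mask[i] = 1u << (rand() % n);
        return mask;
    }

    /* Should pixel i be painted in frame k? */
    int pixel_in_frame(const uint32_t *mask, int i, int k)
    {
        return (mask[i] >> k) & 1u;
    }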

Now, I'm done rambling. Sometime this year (2008) I'll get git set up on dreamhost and post the code for all to see. It's not pasted here because, frankly, in its current state, it's not very interesting :)

UPDATE: repository'd: http://piratejon.com/git?p=scrapture.git



9.12.08

 

Hack Jam: 3.12.08

Topics of discussion: AI, NLP, Cantaloupe, Primes, and the possibility of being 4/9ths Asian.

Just thought I'd dash out a quick note on what went on last week. PJ pulled off several examples of the ever-impressive trick: hacking with cat. That is to say, he wrote compilable C programs using cat, without re-writing.

Those of us who use interpreted languages were perhaps nonplussed. He showed a clear and utter disregard for the fact that coding is supposed to be hard.

However, I got my own back when the time came to sum an infinite series --- Lisp's native support for calculation with rational numbers gave me thousands of digits of accuracy without writing a bignum summer, making my solution only one line long.

This brought to mind an interesting challenge --- writing code with write-or-die...



1.12.08

 

Breaking Screen Captures

So my homie Nick had an interesting idea that I thought I would spend a bit of time exploring. The idea is to break the screen-capturability of an image by replacing it with several rapidly refreshing frames that appear to the naked eye, over time, as a coherent image, but which individually convey no useful information and instead appear scrambled. Under screen capture you get only a single unrecognizable frame. If you take a digital video (i.e. a rapid series of screen captures) you may still be able to view the image by playing the video, but you should not be able to extract the image by itself without some eye-emulating algorithm, which we hope would be extremely difficult to compute.

I think it would be quite possible to simply partition the image into distinct geometric regions and display only part of the whole per frame; however, it would be even easier to reassemble those regions than it was to separate them.

So, this week I'll begin by writing some stuff that blits an array of same-size bitmaps, in order, to the same region of the screen. Then I'll experiment with colors -- what does a solid red region look like, and what does a "red-emulated" region look like, even side by side? Does the number of frames, and how alike or different they are, needed to convey a solid color depend on the color being conveyed? Is it easier or harder to emulate two different colors simultaneously -- that is to say, does color A being adjacent to color B change the number and quality of frames needed to achieve emulation? How about dithering A and B, and with different sizes of blocks? Hopefully the answers to these questions will suggest a formula for easily deconstructing an image into frames that are not easily reconstructible except by the natural power of the unaided (or lens-aided) human eye. At first I will treat color. Next will come shape, and how different shapes in different colors can be contrived to emulate a target image. Finally I hope to make an arbitrary jpeg unscreencapturable without hiding it in a hardware buffer -- this is the long-term goal, of course.
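As a starting point, the bitmap-cycling part might look something like the sketch below, again assuming SDL 1.2; show_frames, the frame array, and the destination rectangle are placeholders, not real project code:

    /* Cycle an array of same-size bitmaps, in order, onto one region of the screen. */
    #include <SDL/SDL.h>

    void show_frames(SDL_Surface *screen, SDL_Surface **frames, int count)
    {
        SDL_Rect dst = { 100, 100, 0, 0 };   /* only x,y matter; w,h come from the source */
        int i = 0;
        for (;;) {
            SDL_Event ev;
            while (SDL_PollEvent(&ev))
                if (ev.type == SDL_QUIT) return;

            SDL_BlitSurface(frames[i], NULL, screen, &dst);  /* whole bitmap -> region */
            SDL_Flip(screen);
            i = (i + 1) % count;             /* next frame, wrapping around */
        }
    }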

One application for this sort of technology is controlling the shareability of images through websites or other software. In the past, schemes have been implemented to disable mouse access to an image via Javascript, or by otherwise manipulating the user-agent to act instead as the "content-provider-agent". I find this usage mostly heinous, but it is interesting nevertheless. More useful would be CAPTCHA technology, where an algorithm might struggle to decode a series of frames that a human eye could read easily -- this may even lead to CAPTCHAs that are easier for humans to read but that bots could not reassemble into a coherent image, much less find the text in. This usage would be much less heinous, though it is still solving a problem that shouldn't exist. I think the most useful application will be in elucidating characteristics of the human eye. And making fun of my colorblind friend Bob.


