Stega


This topic contains 5 replies, has 1 voice, and was last updated by  josh February 13, 2018 at 10:32 am.

  • #8683

    josh

    In the creation of algorithms for steganography and the choice of their parameters, there are tradeoffs between different goals:

    1. performance (the ratio of carrier image bits to allowable message bits),
    2. how suspicious the image or other carrier looks at a glance,
    3. how suspicious the carrier looks after data analysis by someone who doesn’t have the password,
    4. how suspicious it looks after decoding with a password (i.e. is there another level of secrecy below?),
    5. how hard it is to tamper with the content or play man-in-the-middle.

    For the kind of mass-market, lightweight application that I am interested in (an alternative to in key writing), I’m mainly interested in improving 1 and 2. In that context, I thought about how I would go about encoding medium-length text in images of user-friendly size that still look good afterwards.

    First thoughts:
    1) Encode English text bits with some kind of basic Markov code for English – e.g. if the code is only letter-based, then the most common letter, ‘e’, gets a shorter bit code than uncommon letters like ‘z’, punctuation characters, or an ASCII line feed.
    2) Apply a compression algorithm to the bits from step 1 – e.g. bzip2 or gzip.
    3) When an RGB image is analyzed as hue, saturation, and value/lightness, changes to hue and extremes of lightness are the most noticeable. So I would design the algorithm to maintain the original image dimensions, the hue value at each pixel, and the extreme values of light and dark, and see how much change can be put into the rest without hurting the perceived quality. The general rule can be fixed in advance as part of the algorithm. Let’s say that we can also determine, at each pixel location, whether the adjustment that was made was an increase or decrease in lightness and an increase or decrease in saturation. This choice could be driven by a string of bits that is some function of the image parameters which have not changed, or it could just alternate.
    4) Based on the description above, the image contains a particular set of locations where modification could have occurred, and at each location a set of theoretical values that could appear there given the constraints. The encoding algorithm has selected one actual RGB combination at that location based on the data to be encoded. The bits of the message can then be read off as the ordinal value of that specific RGB selection (e.g. choice 17 out of 35).
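    To make steps 1 and 2 concrete, here is a minimal sketch. The prefix code table is entirely made up for illustration (a real scheme would derive code lengths from actual letter frequencies, e.g. via Huffman coding), and the second part just shows a general-purpose compressor doing step 2 on English bytes:

    ```python
    import zlib

    # Toy illustration of step 1: a hand-made prefix code where common
    # letters get shorter bit strings than rare ones. These codes are
    # invented for illustration, not derived from real frequency data.
    PREFIX_CODE = {
        "e": "00", "t": "01", "a": "100", "o": "101",
        "z": "111110", "q": "111111",
    }

    def encode_toy(text):
        """Concatenate the bit codes for characters the table covers."""
        return "".join(PREFIX_CODE[c] for c in text if c in PREFIX_CODE)

    bits = encode_toy("ate")  # 'a' + 't' + 'e' -> '100' + '01' + '00'

    # Step 2: general-purpose compression of the message bytes.
    message = b"the quick brown fox jumps over the lazy dog " * 20
    compressed = zlib.compress(message, level=9)
    print(bits, len(message), len(compressed))
    ```

    In practice step 2 may make step 1 partly redundant, since gzip/bzip2 already exploit letter frequencies; measuring both pipelines on real text would settle which is worth keeping.
    
    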

    Intuitively, I feel the scheme described above will yield better performance and perceptual quality than fiddling with a fixed number of low-order bits in RGB space.
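    A minimal sketch of the ordinal-selection idea in point 4, under the assumption (mine, not stated above) that encoder and decoder can reconstruct the same ordered candidate list at each location from the unmodified image parameters:

    ```python
    import math

    def embed(candidates, bitstream, pos):
        """Pick the candidate whose index spells out the next message bits."""
        k = int(math.log2(len(candidates)))  # whole bits this location holds
        index = int(bitstream[pos:pos + k] or "0", 2)
        return candidates[index], pos + k

    def extract(candidates, chosen):
        """Recover the bits from the ordinal of the chosen value."""
        k = int(math.log2(len(candidates)))
        return format(candidates.index(chosen), "0{}b".format(k))

    # Hypothetical candidate RGB triples at one location (8 choices -> 3 bits).
    cands = [(10, 20, c) for c in range(30, 38)]
    chosen, nxt = embed(cands, "1010110", 0)
    print(extract(cands, chosen))  # recovers '101'
    ```

    Note this throws away the fractional bits when the candidate count is not a power of two; arithmetic-coding-style tricks could recover those if capacity matters enough.
    
    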

    • #8685

      josh

      In this image, there are lots of “blacks” and “whites” which are not extremes of lightness or darkness. I should modify the statement above about hue to say “don’t change the perception of hue” rather than forbidding any change to the floating-point hue value, which is only approximated anyway on the kind of digital scale one sees in color picker widgets.
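      A quick demonstration of why the stored hue number is the wrong thing to preserve: two near-black pixels that look identical can carry wildly different hue values, because hue is numerically unstable when saturation and value are tiny.

      ```python
      import colorsys

      # Two near-black pixels, visually indistinguishable, whose stored
      # hue values land on opposite ends of the hue circle.
      h1, s1, v1 = colorsys.rgb_to_hsv(0.05, 0.05, 0.06)  # hue ~ 0.67 (blue)
      h2, s2, v2 = colorsys.rgb_to_hsv(0.06, 0.05, 0.05)  # hue = 0.0  (red)
      print(round(h1, 2), round(h2, 2))
      ```

      So “don’t change the perception of hue” has to be judged relative to saturation and value, not as a fixed tolerance on the hue channel.
      
      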

    • #8693

      josh

      Better than a random choice of whether to go up or down in saturation/lightness at each adjustable location would be to form a kind of low-pass/grid filter of the running “error adjustment” and automatically pick, at the next locations, the direction that either reduces that error or minimizes the additional error. It would be okay to do this in RGB space. When an adjustment is made at location (x, y), the actual RGB delta at that location is added to a running low-pass filter of the RGB delta in the surrounding neighborhood. When the next location is looked at, we have an RGB delta vector showing the direction of perceptual perturbation there, and the direction of the saturation/value adjustment is chosen to be the one closest to the opposite of that delta, for compensation.
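      The compensation step above can be sketched as follows. The decay factor and the candidate deltas are illustrative choices of mine, and a real version would track the error per neighborhood rather than globally:

      ```python
      DECAY = 0.8  # how quickly old error influence fades (assumed)

      def pick_direction(running_error, up_delta, down_delta):
          """Choose the candidate delta most nearly opposite the drift."""
          def alignment(delta):
              # Dot product with the running error: more negative = more opposing.
              return sum(e * d for e, d in zip(running_error, delta))
          return min((up_delta, down_delta), key=alignment)

      def update_error(running_error, applied_delta):
          """Fold the applied delta into the decaying running average."""
          return tuple(DECAY * e + d for e, d in zip(running_error, applied_delta))

      err = (4.0, 4.0, 0.0)               # drift so far: pushed brighter in R, G
      up, down = (2, 2, 1), (-2, -2, -1)  # the two legal adjustments here (assumed)
      chosen = pick_direction(err, up, down)
      err = update_error(err, chosen)
      print(chosen, err)
      ```

      This is essentially error diffusion as used in dithering, applied to the embedding perturbation instead of quantization error.
      
      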

  • #8701

    josh

    Another point about web pages that is sort of obvious, but maybe worth mentioning – the information-holding capacity of the image is related to its actual pixel dimensions, while the web page can display it smaller than that, and artifacts are harder to see on smaller displayed images.
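    A back-of-envelope capacity estimate, using numbers I am assuming rather than anything measured (roughly half the pixels adjustable, about 2 bits each):

    ```python
    # A 1600x1200 image displayed at 800x600 still carries the full
    # 1600x1200 pixels. Fractions below are assumptions, not measurements.
    width, height = 1600, 1200
    adjustable_fraction = 0.5   # assumed share of pixels we may touch
    bits_per_pixel = 2          # assumed payload per adjustable pixel

    capacity_bytes = int(width * height * adjustable_fraction * bits_per_pixel / 8)
    print(capacity_bytes)  # 240000 bytes of raw capacity before compression
    ```
    
    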

    The examples I have been looking at so far (without seeing the originals) look fine to me, even at a medium size.

  • #8709

    josh

    Is this a case of an altered image where the chunks of snow/ice flying through the air looked better in the original? One wouldn’t notice without a side-by-side comparison, and different pictures can always be chosen, so it doesn’t seem like much of an issue. But if I see the original I could suggest something.
