In the Oslo thread I pointed out that using more or less "unknown" algorithms on any image seems to carry certain risks,
which I would like to define in this thread.
The reason I opened this thread is that I don't want to go too off-topic in the Oslo thread,
and to have a better place to discuss this topic from a more general point of view.
In my opinion this discussion fits right into this research forum.
Let's start from this point:
So, you're saying that this color adjustment algorithm, which apparently estimates instructions from a given, but to you unknown, pool of data, actually proves something?
The problems we have to face:
* We can only interpret the visual output of the algorithm
* By not knowing how the algorithm works on a small scale, we can't predict its output
* Some algorithms share the same name and do the same thing in general, but on a small scale their results differ from each other
A (SIMPLISTIC) LITTLE STORY
Imagine that some time ago you somehow broke the law, and at that time the police took a nice photograph of your face to save in some sort of database, in case they needed it in the future.
Great, now imagine that the following picture is you, more specifically your face:
Picture showing your face
One day you get a visit from two "friendly" policemen telling you
that you have to come with them, since there is evidence that you are involved in some sort of crime.
Of course you didn't do anything, but you follow the policemen anyway.
At the police station they present you with the following picture:
Picture showing the person that committed the crime
Let's assume that this picture shows the face of a person, taken by a security camera, but it is too dark to tell who the person is.
You, of course, claim that it obviously is not you in that picture.
Now, what the police do is the following: they apply some nice color adjustment and suddenly, it's you!
= ?
HOW IS THIS POSSIBLE? (A MORE IN-DEPTH VIEW)
Okay, so let's take a look at the (very simplistic) algorithm the police used.
Let's assume that the color adjustment used is something like a simple state machine consisting of 3 states.
STATE 1: If input BLACK -> output is GREY
STATE 2: If input WHITE -> output is WHITE
STATE 3: If input GREY -> output is WHITE
-> !
So this is simply the way this specific algorithm works.
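To make the toy state machine concrete, here is a minimal sketch in Python. Only the three mapping rules come from the states above; everything else (the function name, the example "image" of coarse tones) is made up for illustration:

```python
# A toy sketch of the simplistic "color adjustment" from the story above.
# The three mapping rules mirror STATE 1-3; the rest is hypothetical.

RULES = {
    "BLACK": "GREY",   # STATE 1: black becomes grey
    "WHITE": "WHITE",  # STATE 2: white stays white
    "GREY": "WHITE",   # STATE 3: grey becomes white
}

def adjust(image):
    """Apply the rule table to every coarse 'pixel' tone in the image."""
    return [RULES[tone] for tone in image]

# A dark, unreadable "image" made of black and grey tones...
dark_image = ["BLACK", "BLACK", "GREY", "BLACK"]

# ...comes out lighter after the adjustment:
print(adjust(dark_image))  # ['GREY', 'GREY', 'WHITE', 'GREY']
```

Note that the viewer only ever sees the output list; the rule table itself stays hidden, which is exactly the problem described above.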
The problem:
A certain image, when a certain algorithm is applied to it, will always give a certain result.
But the result has nothing to do with the original; there's no "higher" intelligence behind all that. The algorithm follows given instructions, which you generally don't know.
Just imagine they changed one single state in the example above: the police would never have visited you, because the result wouldn't show you then.
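The point about changing a single state can also be sketched. Assuming the same toy rule table as in the story, flipping only STATE 1 (so black stays black instead of becoming grey) yields a visibly different result from the exact same input:

```python
# Same toy adjustment as before, but with STATE 1 changed:
# black now stays black instead of becoming grey.
RULES_MODIFIED = {
    "BLACK": "BLACK",  # STATE 1 (changed): black stays black
    "WHITE": "WHITE",  # STATE 2: unchanged
    "GREY": "WHITE",   # STATE 3: unchanged
}

def adjust(image):
    """Apply the modified rule table to every coarse 'pixel' tone."""
    return [RULES_MODIFIED[tone] for tone in image]

dark_image = ["BLACK", "BLACK", "GREY", "BLACK"]

# The same dark image now stays mostly dark -- a different "verdict"
# produced by a one-rule change the viewer never sees:
print(adjust(dark_image))  # ['BLACK', 'BLACK', 'WHITE', 'BLACK']
```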
Some questions coming to my mind:
Does changing images really prove anything?
How does this actually affect your/our research?
EPILOGUE AND MY OPINION
Whenever I try to figure out whether an image is fake or real, I always ask myself if I could go to court with the information I have.
Just imagine the guy from my story above goes to court, and the question is: does the picture that the algorithm was applied to prove anything at all?
If I had to judge: No.
I'd be glad to see more people participating in this discussion.