Like everyone else, we believed in Ok Cupid's conclusions. After all, Ok Cupid's findings were based on behavior, not just talk, right? But every time we looked into this, we found the same thing: daters who used Photofeeler for photo testing were getting right-swipes like never before. So the opinions on our site were translating directly into behavior. We decided it was time for someone to challenge the Ok Cupid study.

We ran each picture through a variety of analysis scripts (in our case, neural nets that detected smiles and eye contact) and also tagged each one by hand until total agreement was reached. Finally, we used Photofeeler attractiveness ratings to gauge the success of the various photo types (smiling, not smiling, eye contact, no eye contact). Ok Cupid's results differed from our own: Ok Cupid's data said that not smiling and not making eye contact was better.

To put it frankly, data can be manipulated to show practically any result that the scientist would like it to. Giving Ok Cupid the benefit of the doubt, let's say their sample was 50/50 male and female (even though it would likelier have skewed female). This is a good sample if you're measuring a condition that will be present in all of the photos. But building on that point, there's the question of how many pictures of men not smiling and not making eye contact were in the data set to begin with. Before Ok Cupid declared that pose superior, it likely made up only 5-10% of the sample (200-300 photos split into 3 groups: smiling/not/flirty).

The explanation given (that they "[feared it] would skew [their] results") is no explanation at all. They didn't have to "fear" anything because, in all likelihood, they first ran their numbers with these populations included.
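The tag-then-compare procedure described above can be sketched in a few lines of Python. Everything here is illustrative: the records, tag names, and scores are invented for the example, and the real pipeline used neural nets and human taggers rather than a hardcoded list. The logic is the same, though: keep a photo only when the automated tag and the hand tag agree, then compare mean attractiveness ratings across photo types.

```python
# Minimal sketch (hypothetical data) of the tagging-and-comparison step:
# keep photos where automated and manual tags agree, then compute the
# mean attractiveness rating per photo type.
from statistics import mean

# Hypothetical records: (auto_tag, hand_tag, attractiveness_rating)
photos = [
    ("smiling_eye_contact", "smiling_eye_contact", 7.8),
    ("smiling_eye_contact", "smiling_eye_contact", 6.9),
    ("not_smiling_no_eye_contact", "not_smiling_no_eye_contact", 5.1),
    ("not_smiling_no_eye_contact", "smiling_eye_contact", 6.0),  # tags disagree: dropped
    ("not_smiling_no_eye_contact", "not_smiling_no_eye_contact", 5.7),
]

# Step 1: keep only photos where the detector and the human tagger agree.
agreed = [(auto, score) for auto, hand, score in photos if auto == hand]

# Step 2: group by photo type and take the mean rating of each group.
by_type = {}
for tag, score in agreed:
    by_type.setdefault(tag, []).append(score)
means = {tag: mean(scores) for tag, scores in by_type.items()}

print(means)
```

Dropping the disagreeing rows before comparing groups is what "tagged each one by hand until total agreement was reached" buys you: the group means are computed only over photos whose labels are not in dispute.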