Researcher here. The scientific method is unbelievably tedious. Way more tedious than you would think. So much so that people are willing to pay researchers to do it for them. A simple yes-or-no question takes weeks or months to answer if you're lucky.
But the upside is that we can remove our own biases from the answer as much as possible. If you see an obvious difference between any 2 groups, then there's little to no point in running the scientific method. But if the difference is less clear, like borderline visible, then biases start to creep in. Someone who thinks there's no difference will look at the data and see no difference, and someone who thinks there's a difference will look at the data and see a difference. The scientific method excels in these cases, because it gives us a relatively objective way to determine whether there is a difference between the 2 groups or not.
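To make that concrete, here's a minimal sketch in Python of the kind of test that can settle a borderline case (the group data are invented for illustration, and a two-sample t-test is just one of many tests a real study might use):

```python
# Sketch: deciding whether a borderline difference between 2 groups
# is real or just noise. All numbers are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=100, scale=15, size=50)  # control group
group_b = rng.normal(loc=105, scale=15, size=50)  # slightly shifted group

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value says the gap is unlikely to be pure chance; a large one
# says the data can't tell the groups apart. Either way, the verdict
# doesn't depend on which answer the researcher was hoping for.
```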
P<0.05 means that even when there's no real effect, about one in 20 studies will still turn up a "significant" result just by chance. If you have 20 researchers studying the same thing, the 19 who get non-significant results don't get published and get thrown in the trash, and the one who gets a "result" sees the light of day.
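You can watch this happen in a quick simulation (a hypothetical setup: both groups are drawn from the same distribution, so any "significant" finding is a false positive):

```python
# Sketch: 20 researchers all study an effect that does not exist.
# On average, about 1 of them still gets p < 0.05 by luck alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
hits = 0
for researcher in range(1, 21):
    a = rng.normal(0, 1, size=30)  # both groups come from the
    b = rng.normal(0, 1, size=30)  # SAME distribution: no real effect
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        hits += 1
        print(f"Researcher {researcher}: p = {p:.3f} <- 'significant', gets published")
print(f"{hits} of 20 found a 'result' where none exists")
```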
That's why publishing negative results is important, but it's rarely done because nobody gets credit for a failed experiment. It's also why it's important to wait for replication. One swallow does not make a summer, no matter how much breathless science reporting happens whenever someone announces a positive result from a novel study.
TL;DR - math is hard
I feel like this applies more to flaws in how studies are published, and the incentives surrounding that, than to the scientific method itself.
P<0.05
How might one translate this to everyday language?
P<0.05 means that, if there were really no effect, the chance of getting a result at least this extreme as a statistical fluke is less than 0.05, or 1 in 20. It's the most common threshold for a result to be considered meaningful, but you'll also see p<0.01 or smaller if the data show the odds of the result being a fluke are smaller than 1 in 20, like 1 in 100. The smaller the p-value the better, but smaller p-values demand larger data sets, which cost more money out of your experiment budget to recruit subjects, buy equipment, and pay salaries. Gotta make those grant budgets stretch, so researchers go with 1 in 20 to save money, since it's the common standard.
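To illustrate the cost trade-off, here's a rough power calculation (the effect size and the 80% power target are assumptions picked for the example; statsmodels' power calculator does the arithmetic):

```python
# Sketch: the stricter your p threshold, the more subjects you need,
# holding the assumed effect size and statistical power constant.
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for alpha in (0.05, 0.01, 0.001):
    n = solver.solve_power(effect_size=0.3,  # assumed smallish effect
                           alpha=alpha,      # the p threshold
                           power=0.8)        # 80% chance of detecting it
    print(f"alpha = {alpha}: ~{n:.0f} subjects per group")
# Going from a 1-in-20 to a 1-in-1000 threshold roughly doubles
# the number of subjects you have to recruit and pay for.
```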
To expand on the other fella’s explanation:
In psychology especially, and some other fields, the ‘null hypothesis’ is used. That means that the researcher ‘assumes’ that there is no effect or difference in what he is measuring. If you know that the average person smiles 20 times a day, and you want to check if someone (person A) making jokes around a person (person B) all day makes person B smile more than average, you assume that there will be no change. In other words, the expected outcome is that person B will still smile 20 times a day.
The experiment is performed and data collected. In this example, that means how many times person B smiled during the day. Do that for a lot of people, and you have your data set.

Let's say they discovered the average number of smiles per day was 25 during the experimental procedure. Using some fancy statistics (not really fancy, but it sure can seem like it), you calculate the probability that you would get an average of 25 smiles a day if the assumption that making jokes doesn't change the 20-per-day average were true. The more people you experimented on, and the larger the deviation from the assumed average, the lower that probability. If the probability is less than 5%, you say that p<0.05, and for a research experiment like the one described above, that's probably good enough for your field to pat you on the back and tell you that the "null hypothesis" of there being no effect from your independent variable (the making-jokes thing) is wrong, and you can confidently say that making jokes will cause people to smile more, on average.
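If you want to see what the "fancy statistics" look like in practice, here's a sketch with a one-sample t-test (the smile counts are invented for illustration; a real study would also justify its sample size in advance):

```python
# Sketch of the smiles example: do the observed counts differ from the
# assumed baseline of 20 smiles per day? Data invented for illustration.
import numpy as np
from scipy import stats

smiles = np.array([25, 22, 28, 19, 31, 24, 26, 23, 27, 25])  # person B counts
t_stat, p_value = stats.ttest_1samp(smiles, popmean=20)
print(f"mean = {smiles.mean():.1f}, t = {t_stat:.2f}, p = {p_value:.4f}")
# If p < 0.05, reject the null hypothesis of "no effect" and conclude
# that making jokes raises the average smile count.
```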
If you are being more rigorous, or testing multiple independent variables at once, as you might when examining different therapies or drugs, you start making your X smaller in the p<X statement. Good studies predetermine what X they will use, to avoid the mistake of settling on whatever number happens to fit your data as "good enough". The simplest such correction is sketched below.
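The simplest version of "making X smaller" is a Bonferroni correction: divide your threshold by the number of tests you run. A sketch (the p-values here are invented):

```python
# Sketch: Bonferroni correction for testing several hypotheses at once.
# Each individual test must clear alpha / (number of tests).
p_values = [0.030, 0.004, 0.200, 0.048]   # one per independent variable
alpha = 0.05
corrected = alpha / len(p_values)          # 0.0125 here

for i, p in enumerate(p_values, start=1):
    verdict = "significant" if p < corrected else "not significant"
    print(f"test {i}: p = {p:.3f} -> {verdict} at corrected alpha {corrected}")
```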
Its strength is generating models of reality that have predictive power, and fine-tuning those models as new information is obtained.
Its weaknesses are a lack of absolute certainty and the inability to model that which has no detectable impact on reality.
Also, it never touches any why-questions.
“Why”, when distinguished from “how”, is asking about the intent of a thinking agent. Neuroscience, psychology, and sociology exist for when thinking agents are involved. When they’re not, that type of “why” makes no sense.
I think that’s because there is no answer to “why” - At least not one that would satisfy the human mind.
The best we are ever going to get is "it just is".
I don’t think this is true. “Why” questions merely need to be translated from the abstract to the tangible in order to be tested.
Perhaps you meant the philosophical and/or metaphysical? Even then, sometimes it's just a matter of translating an abstract concept into something tangible to test. But, yes, some questions simply cannot be answered by science. That doesn't mean a system of logic and testing can't still be applied to find a reasonable answer, and even there the scientific method can serve as a guide.
Truth in any context will always rely on facts, what can be proven by attainable evidence. Let logic be your guide. Fear no knowledge. Always remember to be good and empathetic and kind with that knowledge.
Truth in any context will always rely on facts
Why?
We got some 101’s in here beanbag chairin it up.
Speak for yourself, I’m having this conversation from a papasan chair I found on the side of the road
Yeah I’m the one on the beanbag sorry for the confusion guys
Because without facts, what you have is not “truth.” It’s either speculation or bullshit.
I think the point is that this is paradoxical. If everything must be proven by facts and we cannot trust any general, abstract statement of its own accord, then how can we prove "everything must be proven by facts and we cannot trust any general, abstract statement of its own accord"? What if that's a wrong assumption?
Maybe the truth is we don’t always need to rely on observable facts, but we don’t know that because we’re making the aforementioned assumption without having any proof that it’s correct.
axioms have entered the chat
The deeper you go into why territory, the more abstract and tangential your axioms get.
So yeah. All facts and truths ultimately rest on foundations that are either kinda unobservable or unproven. Doesn’t make them less practical or true (by practical definitions) though.
To get a fact out of an observation requires interpretation and a desire-to-interpret. It’s observation translated into dreamstuff.
It is, in cases where it works, probably the best available method we have for finding the truth.
But there are a lot of questions it cannot answer, it can still give the wrong result just by chance, and the results are only as good as the assumptions you made. The last point is particularly important, and can allow bias to creep in even when all the experiments are done correctly.
Finally, real scientists often do not (and sometimes cannot) follow the scientific method perfectly, due to all sorts of reasons.
Strength:
- Allows us to predict the future and understand reality
Weakness:
- Only works on falsifiable hypotheses
- Relies on peer review and replication which are pretty dead
- Requires a basic understanding of math in order to understand why it works
strength is it’s replicable. Not just somebody claiming something without justifying it can happen.
This is totally false in practice.
How is this incorrect? In which field? And how do you confirm the validity of your methodology?
Replication rarely happens and in many cases is outright impossible due to a lack of shared code.
Things should be replicable, but that hasn’t been the case for a while.
the correct term you need is ‘unachievable’, not ‘false’. […] anyway, it depends on the field and type of study.
That’s just wordplay to make the problem seem like it’s not as big of a problem.
Common standards for language formally used in a specific field/profession/discipline aren’t “wordplay” lol
This isn’t a professional forum. Playing the “it’s a technical term” game is absolutely wordplay.
Here’s a great article published yesterday on how science seems to be fueling the meaning crisis:
https://bigthink.com/13-8/why-science-must-contend-with-human-experience/