If you have kids, chances are you’ve worried about their presence on social media.
Who are they talking to? What are they posting? Are they being bullied? Do they spend too much time on it? Do they realise their friends’ lives aren’t as good as they look on Instagram?
We asked five experts if social media is damaging to children and teens.
Four out of five experts said yes
The four experts who found social media is damaging pointed to its negative effects on mental health and body image, disturbed sleep, cyberbullying, social comparison, and privacy concerns.
However, they also conceded it can have positive effects in connecting young people with others, and living without it might even be more ostracising.
The dissenting voice said it’s not social media itself that’s damaging, but how it’s used.
Here are their detailed responses:
If you have a “yes or no” health question you’d like posed to Five Experts, email your suggestion to: firstname.lastname@example.org
Karyn Healy is a researcher affiliated with the Parenting and Family Support Centre at The University of Queensland and a psychologist working with schools and families to address bullying. Karyn is co-author of a family intervention for children bullied at school. Karyn is a member of the Queensland Anti-Cyberbullying Committee, but not a spokesperson for this committee; this article presents only her own professional views.
But these support networks are not always in place. Geographical or emotional distance from family members, conflict with friends, and the tendency for people with mental illness to withdraw from others mean these individuals are often isolated.
In two Australian surveys – a national snapshot survey of Australian adults with psychosis and another looking at adults with long-term mental health conditions such as depression, anxiety, and psychosis – only one-quarter reported receiving regular assistance from family or friends. About three out of every four people living with mental illness reported the absence of a carer or other informal support.
For someone living with mental illness, having a carer or support person facilitates continuity of care and provides advocacy and support, particularly during and after episodes of acute illness.
People with mental illness are at their most vulnerable following discharge from hospital or other inpatient facilities. Reintegrating back into society can be challenging. And during this time, the risk of suicide is high.
It’s somewhat unsurprising, then, that people without a carer or support network face poorer outcomes in terms of recovery.
How does having a carer help?
Following hospitalisation for an acute episode of mental illness, people typically require assistance with a myriad of tasks.
They may need help with day-to-day activities like grocery shopping, cooking and cleaning. People in recovery may also need support to re-engage with the community, including returning to work or study.
They will likely benefit from assistance in adhering to care plans, including managing medications and attending follow-up appointments. A person recovering from hospitalisation for an eating disorder may need support from family members to ensure they are eating as much as they need to at mealtimes.
As well as these practical supports, someone recovering from an acute episode of mental illness requires ongoing emotional support which reaffirms their sense of self and capacity to recover.
We surveyed 105 Australian mental health carers and found the most commonly reported care tasks were “providing encouragement and motivation”, “prompting their loved ones to do things”, and “liaising with health professionals”.
Carers spent the majority of their caring time providing emotional support, and the least assisting with activities of daily living, such as feeding and dressing.
What about discharged patients who don’t have informal supports?
The transition from hospital to home can be frightening and difficult. Patients tend to become accustomed to the day-to-day hospital routine and in turn can feel increasingly disconnected from the outside world. These challenges are exacerbated in the absence of support from health professionals, family or friends.
One study of older psychiatric patients found absent or dysfunctional family support was one of the strongest predictors of hospital readmission in the 18 months after discharge. Patients without reliable family support were nearly twice as likely to be readmitted to hospital as those who had dedicated family carers.
Similar results have been found in broader and larger samples. Among 1,384 adult patients admitted to a psychiatric hospital, unreliable social support at discharge was associated with an increased risk of being readmitted to hospital within one year.
Alongside the absence of family support, lack of connection with community-based services and supports is similarly associated with poor post-discharge outcomes.
Discharge planning and transitional programs have been established to provide additional practical and emotional support to people with mental illness after they leave hospital. These have reported promising results in terms of preventing hospital readmission and promoting engagement with community treatment (such as psychological support services, medication monitoring, and alcohol and drug recovery services). Further research, using larger controlled studies, is needed to identify the key benefits of such programs.
Another solution is improving housing support for mental health patients after discharge. Programs such as Housing Mental Health Pathways in Victoria assist people with mental illness and a history of homelessness who have no suitable accommodation at the time of hospital discharge. More programs like this are needed.
The time following hospitalisation is one of the most vulnerable for people with mental illness. More needs to be done at both a community and policy level to better support people during this period – particularly those without a carer or informal support network.
If this article has raised issues for you or you’re concerned about someone you know, call Lifeline on 13 11 14.
Many of those who’ve suffered from illness or disease would have received the advice to “stay positive”. Is this sage advice that can truly have a positive effect on health, or an added burden for someone who is already suffering – the need to also feel good about it?
We asked five experts in various fields whether a positive mindset can affect outcomes for those suffering from illness and disease.
Five out of five experts said yes
However, they had some important caveats. It depends on the disease – for example, one expert said studies in cancer have not found positive thinking affects disease progression or the likelihood of early death.
And while our mental health can have powerful effects on our physical health, the perceived need to “stay positive” can be an added burden during a difficult time. So it’s also important to remember grief is normal.
Here are the experts’ detailed responses:
If you have a “yes or no” health question you’d like posed to Five Experts, email your suggestion to: email@example.com
Erica Sloan is a member of the Scientific Advisory Board of Cygnal Therapeutics. Jayashri Kulkarni receives funding from the NHMRC.
Among 18 to 25 year olds, one in three (35%) reported feeling lonely three or more times a week. We also found that higher levels of loneliness increase a young adult’s risk of developing depression by 12% and social anxiety by 10%.
Adolescents aged 12 to 17 reported better outcomes, with one in seven (13%) feeling lonely three or more times a week. Participants in this age group were also less likely to report symptoms of depression and social anxiety than the 18 to 25 year olds.
Young adulthood can be a lonely time
Anyone can experience loneliness at any point in life, but it’s often triggered by significant life events – both positive (such as new parenthood or a new job) and negative (bereavement, separation or health problems).
Young adults are managing new challenges such as moving away from home and starting university, TAFE or work. Almost half (48%) of the young adults in our survey lived away from family and caregivers. Almost 77% were also engaged in some sort of work.
Young people at high school may be buffered from loneliness because they’re surrounded by peers, many of whom they have known for years. But once they leave the safety of these familiar environments, they are likely to have to put in extra effort to forge new ties. They may also feel more disconnected from the existing friends they have.
During this transition to independence, young adults may find themselves with evolving social networks, including interactions with colleagues and peers of different ages. Learning to navigate these different relationships requires adjustment, and a fair bit of trial and error.
Is social media use to blame?
The reliance on social media to communicate is often thought to cause loneliness.
No studies I’m aware of have examined the cause-and-effect relationship between loneliness and social media use.
There is some evidence that those who are lonely are more likely to use the internet for social interactions and spend less time in real life interactions. But it’s unclear whether social media use causes more loneliness.
While social media can be used to replace offline relationships with online ones, it can also be used to both enhance existing relationships and offer new social opportunities.
Further, a recent study found that the relationship between social media use and psychological distress was weak.
Is loneliness a cause or effect of mental ill health?
Loneliness is bad for our physical and mental health. Over a six-month period, people who are lonely experience higher rates of depression, social anxiety and paranoia. Being socially anxious can also lead to more loneliness at a later time.
The solution isn’t as simple as joining a group or trying harder to make friends, especially if someone already feels anxious about being around people.
While lonely people are motivated to connect with others, they are also more likely to experience social interactions as stressful. Brain imaging studies show lonely people are less rewarded by social interactions and more attuned to the distress of others than their less lonely counterparts.
When lonely people do socialise, they are more likely to engage in self-defeating actions, such as being less cooperative, and show more negative emotions and body language. This is done in an (often unconscious) attempt to disengage and protect themselves from rejection.
Lonely people are also more likely to find reasons people cannot be trusted or do not live up to particular social expectations, and to believe others evaluate them more negatively than they actually do.
What can we do about it?
One way to address these invisible forces is to help young people think in more helpful ways about friendship, and to understand how they can influence others through their emotions and behaviours.
Parents, educators and counsellors can play a role in educating children and young people about the dynamics of evolving friendships. This might involve helping the young person to evaluate their own behaviours and thought patterns, understand how they play an active role in building relationships, and to support them to interact differently.
This might also include helping young people identify their strengths and learn how these are important in forging strong, meaningful relationships. If the young person identifies humour as a strength, for instance, this might involve discussing how they can use their humour to establish rapport with others.
Educational programs can do more to address the social health of young people and these discussions can be integrated into health education classes.
Additionally, because young people are already frequent and competent users of technology, carefully crafted digital tools could be developed to target loneliness.
These tools could help young people learn skills to develop and maintain meaningful relationships. And because lonely people are more likely to avoid others, digital tools could also be used as one way to help young people build social confidence and practise new skills within a safe space.
A cornerstone of any solution, however, is to normalise feelings of loneliness, so feeling lonely is seen not as a weakness but rather as an innate human need to connect. Loneliness is likely to negatively impact on health when it is ignored, or not properly addressed, allowing the distress to persist.
Identifying and normalising feelings of loneliness can help lonely people consider different avenues for action.
We don’t yet know the lifelong impact of loneliness on today’s young people, so it’s important we take action now, by increasing awareness and giving young people the tools to develop and maintain meaningful social relationships.
Michelle Lim, the author of this piece, is available for a Q+A on Tuesday the 1st October from 3pm-4pm AEST to take questions on this topic. Please post your questions in the comments below.
Although we continue to understand more about self-injury, there remains significant public stigma towards people who self-injure.
This stigma can make people who self-injure reluctant to seek help or disclose their experiences to others. Research shows only half the people who are already seeing a therapist for mental health concerns will even tell their therapist about it.
Self-injury is often thought of as a “teen fad”, and as especially prevalent among teenage girls. It’s true self-injury usually starts during adolescence, but people of all ages and genders self-injure. Recent research shows the second most common time to start self-injury is in a person’s early 20s.
Consistent with this, self-injury is common among university students; up to one in five report a history of self-injury, with about 8% self-injuring for the first time during university.
Although more women in treatment settings report self-injury, it’s likely that in community settings, self-injury is equally common among males and females. This may be because women are more likely than men to seek help.
Myth 2: people who self-injure are attention-seeking
One of the more pervasive myths about self-injury is that people self-injure to seek attention. Yet, self-injury is usually a very secretive behaviour, and people go to great lengths to hide their self-injury.
Common reasons people self-injure include punishing themselves or stopping an escalating cycle of painful thoughts and feelings. People may also self-injure to communicate how distressed they are, particularly if they have trouble verbally expressing their feelings. In other words, their self-injury is a cry for help.
A recent study found influencing and punishing others was the least likely reason for self-injury.
Myth 3: people who self-injure are suicidal
By definition, non-suicidal self-injury is not motivated by a desire to end life. In addition to serving a different function, the frequency of suicidal and non-suicidal behaviours differs. That is, suicide attempts are generally infrequent, whereas non-suicidal behaviours can occur more often.
The methods used, the outcomes of the behaviours, and appropriate treatment responses also all differ. People at risk of suicide may require immediate and more intensive intervention; although both non-suicidal self-injury and suicidal behaviour need to be taken seriously and responded to compassionately.
For these reasons, it’s important to be clear when we are talking about self-injury and when we are talking about suicidal thoughts or behaviour.
Myth 4: there is a self-injury ‘epidemic’
While many people report at least one instance of self-injury, fewer people engage in repeated episodes.
Further, there is little evidence rates of self-injury have increased in recent years. Hospital records indicate an increase in presentations for “deliberate self-harm”, but these are predominantly poisonings, not self-injury.
Other studies show more people reporting self-injury, but it’s unclear whether this is because people are more comfortable disclosing their self-injury, or because self-injury is increasing.
Research suggests when the methodologies of the studies are taken into account, rates of self-injury have not increased over time.
The internet and social media are highly relevant to many people who self-injure, as they offer a means to obtain social support, share experiences with others who have been through similar things, and access coping and recovery-oriented resources (for example, stories about other people’s experiences).
This is not surprising given the stigma attached to self-injury, which leaves many people who self-injure feeling isolated from others.
Despite these benefits, there are concerns online material, including graphic images and videos depicting self-injury, may trigger people to engage in self-injury. While only a few studies explicitly examine this, there is some evidence viewing graphic imagery is associated with self-injury. However, images of scars may not be as triggering.
There are also concerns exposure to messages that carry hopeless themes (for example, “it’s impossible to stop self-injuring”), may contribute to continued self-injury and impede help-seeking.
But at the same time, exposure to more positive messages may offer hope about recovery.
Self-injury is a common behaviour engaged in by a broad spectrum of people. Given its association with psychological difficulties and suicide risk, it’s critical self-injury be taken seriously and not dismissed or glossed over.
People who engage in self-injury need to know it’s okay to seek support (from friends, family, and health professionals) and that people can and do recover.
For anyone who knows someone who self-injures, it’s important to respond to that person in a non-judgemental and compassionate manner. Just knowing there is someone supportive who is willing to listen can make a big difference to a person who self-injures.
If this article has raised issues for you, or if you’re concerned about someone you know, call Lifeline on 13 11 14.
Depression is a serious disorder marked by disturbances in mood, cognition, physiology and social functioning.
People can experience deep sadness and feelings of hopelessness, sorrow, emptiness and despair. These core features of depression have expanded to include an inability to experience pleasure, sluggish movements, changes in sleep and eating behaviour, difficulty concentrating and suicidal thoughts.
The first diagnostic criteria were introduced in the 1980s. Now we have an expanded set of concepts for describing depression, from mild to severe, major depressive disorder, chronic depression and seasonal affective disorder.
How we describe and classify mental disorders is a fundamental step towards explaining and treating them. When carrying out research on people with depression, diagnostic categories such as major depressive disorder (MDD) shape our explanations. But if the descriptions are wrong, our explanations will suffer as a consequence.
The problem is that classification and explanation are not completely independent tasks. How we classify disorders directly impacts how we explain them, and these explanations in turn impact our classifications. In this way, psychiatry is stuck in a circular trap.
The danger – for depression and for other mental disorders – is that we tailor our explanations to fit the classifications available and that the classifications are inadequate.
Traditionally, research has focused on understanding mental disorders as classified in manuals such as the Diagnostic and Statistical Manual of Mental Disorders. Most of these disorders are what we call “psychiatric syndromes” – clusters of symptoms that hang together in some meaningful way and are assumed to share a common cause.
But many of these syndromes are poorly defined because disorders can manifest in different ways in different people. This is known as “disorder heterogeneity”. For example, there are 227 different symptom combinations that meet the criteria for major depressive disorder.
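The figure of 227 follows from a quick combinatorial count: a diagnosis requires at least five of nine symptom criteria, and at least one of those must be a “core” symptom (depressed mood or anhedonia). As a minimal sketch of that arithmetic:

```python
from math import comb

TOTAL_SYMPTOMS = 9   # DSM symptom criteria for major depressive disorder
CORE_SYMPTOMS = 2    # depressed mood, anhedonia (at least one required)
MIN_REQUIRED = 5     # minimum number of symptoms for a diagnosis

count = 0
for k in range(MIN_REQUIRED, TOTAL_SYMPTOMS + 1):
    # all k-symptom combinations, minus those containing no core symptom
    count += comb(TOTAL_SYMPTOMS, k) - comb(TOTAL_SYMPTOMS - CORE_SYMPTOMS, k)

print(count)  # 227
```

Two people can therefore share a diagnosis of major depressive disorder while having almost no symptoms in common.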
The other problem is that diagnostic criteria often overlap across multiple disorders. Symptoms of restlessness, fatigue, difficulty concentrating, irritability and sleep disturbance can be common for people experiencing generalised anxiety disorder or major depressive disorder.
This makes studying disorders like depression difficult. While we may think we are all explaining the same thing, we are actually trying to explain completely different variations of the disorder, or in some cases a completely different disorder.
A significant challenge is how to advance classification systems without abandoning their descriptive value and the decades of research they have produced. So what are our options?
A categorical approach, which sees disorders as discrete categories, has been the most prominent model of classification. But many researchers argue disorders such as depression are better seen as dimensional. For example, people who suffer from severe depression are just further along a spectrum of “depressed mood”, rather than being qualitatively different from the normal population.
The former relies on current diagnostic categories and all the problems that come with that. The latter relies on neuro-centrism, which means mental disorders are viewed as disorders of the brain and biological explanations are used in preference to social and cultural explanations.
A new approach called the symptom network model offers a departure from the emphasis on psychiatric syndromes. It sees mental disorders not as diseases but as the result of interactions between symptoms.
In depression, an adverse life event such as loss of a partner may activate a depressed mood. This in turn may cause neighbouring symptoms, such as insomnia and fatigue. But this model is only descriptive and offers no explanation of the processes that cause the symptoms themselves.
A simple way forward
We suggest that one way of advancing understanding of mental disorders is to move our focus from psychiatric syndromes to clinical phenomena.
Phenomena are stable and general features. Examples in clinical psychology include low self-esteem, aggression, low mood and ruminative thoughts. The difference between symptoms and phenomena is that the latter are inferred from multiple information sources, such as behavioural observation, self-report and psychological test scores.
For example, understanding the central processes that underpin the clinical phenomenon of the inability to experience pleasure (anhedonia) will provide greater insight for cases that are dominated by this symptom.
In this way we can begin to tailor our explanations for individual cases rather than using general explanations of the broad syndrome “major depressive disorder”.
The other advantage is that the central processes that make up these phenomena are also more likely to form reliable clusters or categories. Of course, achieving this understanding will require greater specification of clinical phenomena we want to explain. It is not enough to conclude that a research finding (such as low levels of dopamine) is associated with the syndrome depression, as the features of depression may vary significantly between individuals.
We need to be more specific about exactly what people with depression in our research are experiencing.
Building descriptions of clinical phenomena will help us to better understand links between signs, symptoms and causes of mental disorder. It will put us in a better position to identify and treat depression.
If you have a question you’d like an expert to answer, send it to firstname.lastname@example.org.
Where do phobias come from? – Olivia, age 12, Strathfield, Sydney.
Phobias are an intense fear of very specific things like objects, places, situations or animals. The most common phobias for children and teens are phobias of specific animals such as dogs, cats or insects.
When someone suffers from a phobia, they tend to avoid these places or things at all costs. That can be very hard to do and often leads to a lot of other problems.
There are many different factors that might make it more likely for someone to develop a phobia.
However, research tells us that to some degree specific phobias are learned. In addition, factors such as life experiences, your personality, and even how the people around you cope all contribute to whether or not you develop a phobia.
Specific phobias are very common, especially among children and adolescents. Research tells us that approximately 10% of children will experience a specific phobia, making this type of anxiety one of the most common anxiety disorders affecting young people.
Here are three main learning scenarios that may influence whether or not you develop a phobia.
1. Seeing other people (such as parents or friends) get really scared in a specific situation, or around a particular object or animal. This is called “modelling”. When you see someone else “model” a fear reaction to certain things, you may learn to be afraid of the same thing.
2. Hearing or reading scary stories about a situation, object or animal. For example, a parent who always tells you, “dogs are dangerous”, “never approach a dog”, “beware of dogs”, teaches you that ALL dogs are dangerous, ALL of the time, which may contribute to you developing a fear or phobia of dogs.
3. Having a frightening experience with a particular object, animal or situation. We call this “direct conditioning”. For example, you may have been growled at or even bitten by a dog; or be swept up in a rip in the ocean; or have had a tree fall on your house in a bad storm. These experiences are often very scary, and some children may then feel afraid whenever they are in that situation again.
It is important to remember, however, that not all children who see, hear or experience bad things develop a specific phobia. There are other things that might contribute. Research suggests phobias often run in families, so there may be a genetic link. Personality (or what doctors call “temperament”) may even play a role.
The good news
The good news is that there are many other factors that might help to protect children or adolescents from developing a phobia, even if you have had a very bad experience. For example, support from family and friends can help and comfort you when something scary happens.
Some research suggests that being optimistic can protect you from fear. Being someone who thinks about the world and themselves in a really positive way – seeing the glass half full instead of half empty – may reduce the impact of or development of anxiety and fears.
And finally, the most powerful way to stop a fear turning into a phobia is to face your fears – even when you feel nervous or scared. For example, you might feel really scared about giving a speech. But if you practise and do some public speaking, you might realise it’s not as bad as you imagined!
You may learn you are braver and stronger than you know.
The mental health survey will be run in 2020, with new data on how common mental illness is due the year after. This is a welcome announcement for the mental health sector, because information gathered in a survey like this can be used to shape policy reform.
But eating disorders, a major category of mental illnesses, have so far been neglected by all major data collection initiatives in Australia. Notably, they were missing from the last national mental health surveys in 1997 and 2007.
Eating disorders are not yet an official part of this new survey, but we understand they are being considered.
If people with eating disorders are not counted, they don’t count. In other words, we need to know who has these severe and debilitating conditions, and then work towards improving the treatment and supports available for them.
National surveys ask the public if they have experienced symptoms of various mental illnesses, either in their lifetime or during the past 12 months.
People who answer “yes” to particular clusters of symptoms are “diagnosed”, or assumed to have had the illness.
Asking the public about their symptoms is the best way to understand how common mental illnesses are. This is because most people with a mental illness don’t seek treatment and may never have had a diagnosis. So collecting data from health services or based on reported diagnoses doesn’t provide a full picture.
Also, for some mental illnesses, such as anorexia nervosa or psychosis, people might not realise they have a diagnosable illness. But they are likely to respond “yes” to direct questions about their experiences with body dissatisfaction or thinking difficulties.
Eating disorders are more than just anorexia
A person with anorexia nervosa engages in dangerous behaviours to maintain a very low body weight, or to lose more weight. Although most people have heard of it, anorexia is not common. We know this from other countries that have previously studied the prevalence of anorexia in community surveys.
That being said, it’s very serious and can be fatal. It has the highest mortality of all non-substance use mental disorders, and one in five of those deaths is by suicide.
People with eating disorders often have a negative body image, and a strong perception their self-worth is tied to their appearance or body weight.
Burden of disease
Every year in Australia, millions of years of healthy life are lost because of injury, illness or premature deaths in the population. This is known as “burden of disease”.
Like national surveys, burden of disease studies are extremely important for planning and funding health services. They use prevalence statistics, or how many people per 100,000 Australians are assumed to have a particular illness. Given we don’t have good data on how prevalent eating disorders are, we likely underestimate their burden of disease.
The recently released Australian Burden of Disease Study 2015 lists eating disorders among the most burdensome illnesses for Australian females, being the tenth leading cause of total burden of disease for girls aged 5-14 and women aged 25-44.
Eating disorders were estimated to cost the health system A$99.9 million in the year 2012 alone.
Better treatment and prevention of eating disorders would reduce the cost and the burden of disease. But we need the data to show where the treatment gaps are and how to fund better services.
There are many promising elements of the proposed Intergenerational Health and Mental Health Study. These include surveying multiple people in a family, gathering physical and mental health data, and a target of more than 60,000 Australians. But it’s time eating disorders were included.
Unfortunately, we cannot use this type of evidence to promote eating chocolate as a safeguard against depression, a serious, common and sometimes debilitating mental health condition.
This is because this study looked at an association between diet and depression in the general population. It did not gauge causation. In other words, it was not designed to say whether eating dark chocolate caused a reduction in depressive symptoms.
People in the study reported what they had eaten in the previous 24 hours in two ways. First, they recalled it in person to a trained dietary interviewer using a standard questionnaire. The second time, they recalled what they had eaten over the phone, several days after the first recall.
The researchers then calculated how much chocolate participants had eaten using the average of these two recalls.
Dark chocolate needed to contain at least 45% cocoa solids for it to count as “dark”.
The researchers excluded from their analysis people who ate an implausibly large amount of chocolate, as well as people who were underweight and/or had diabetes.
The remaining data (from 13,626 people) was then divided in two ways. One was by categories of chocolate consumption (no chocolate, chocolate but no dark chocolate, and any dark chocolate). The other way was by the amount of chocolate (no chocolate, and then in groups, from the lowest to highest chocolate consumption).
The researchers assessed people’s depressive symptoms by having participants complete a short questionnaire asking about the frequency of these symptoms over the past two weeks.
The researchers controlled for other factors that might influence any relationship between chocolate and depression, such as weight, gender, socioeconomic factors, smoking, sugar intake and exercise.
What did the researchers find?
Of the entire sample, 1,332 people (11%) said they had eaten chocolate in their two 24-hour dietary recalls, with only 148 (1.1%) reporting eating dark chocolate.
A total of 1,009 people (7.4%) reported depressive symptoms. But after adjusting for other factors, the researchers found no association between any chocolate consumption and depressive symptoms.
However, people who ate dark chocolate had a 70% lower chance of reporting clinically relevant depressive symptoms than those who did not report eating chocolate.
When investigating the amount of chocolate consumed, people who ate the most chocolate were the least likely to report depressive symptoms.
What are the study’s limitations?
While the size of the dataset is impressive, there are major limitations to the investigation and its conclusions.
First, assessing chocolate intake is challenging. People may eat different amounts (and types) depending on the day. And asking what people ate over the past 24 hours (twice) is not the most accurate way of telling what people usually eat.
Then there’s whether people report what they actually eat. For instance, if you ate a whole block of chocolate yesterday, would you tell an interviewer? What about if you were also depressed?
This could be why so few people reported eating chocolate in this study, compared with what retail figures tell us people eat.
Third, the authors’ results are mathematically accurate, but misleading.
Only 1.1% of people in the analysis ate dark chocolate, and when they did, the amount was very small (about 12g a day). Just two people both ate any dark chocolate and reported clinical symptoms of depression.
The authors conclude the small numbers and low consumption “attests to the strength of this finding”. I would suggest the opposite.
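To illustrate the small-numbers problem, here is a rough, unadjusted calculation using only the counts reported above (2 dark-chocolate eaters with depressive symptoms, 148 dark-chocolate eaters, 1,009 people with symptoms, 13,626 people in total). It is a sketch, not the study’s adjusted analysis, but the width of the confidence interval shows how fragile an estimate built on two cases is.

```python
import math

# 2x2 table derived from the counts reported in the article:
#                   depressed   not depressed
# dark chocolate        2            146      (148 dark-chocolate eaters)
# no dark chocolate  1007          12471      (remaining 13,478 participants)
a, b, c, d = 2, 146, 1007, 12471

# Crude (unadjusted) odds ratio for depressive symptoms
# among dark-chocolate eaters vs everyone else
odds_ratio = (a * d) / (b * c)

# 95% confidence interval via the standard log-odds-ratio formula
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se)
upper = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
```

The crude odds ratio comes out around 0.17, but the interval stretches from roughly 0.04 to roughly 0.69, spanning more than an order of magnitude. With only two exposed cases, the data are consistent with anything from a very strong apparent protective effect to a fairly modest one, which is why small numbers weaken rather than strengthen the finding.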
Finally, people who ate the most chocolate (104-454g a day) had an almost 60% lower chance of having depressive symptoms. But those who ate 100g a day had only about a 30% lower chance. Who’d have thought four or so more grams of chocolate could be so important?
This study and the media coverage that followed are perfect examples of the pitfalls of translating population-based nutrition research to public recommendations for health.
My general advice is, if you enjoy chocolate, go for darker varieties, with fruit or nuts added, and eat it mindfully. — Ben Desbrow
Blind peer review
Chocolate manufacturers have been a good source of funding for much of the research into chocolate products.
While the authors of this new study declare no conflict of interest, any whisper of good news about chocolate attracts publicity. I agree with the author’s scepticism of the study.
Just 1.1% of people in the study ate dark chocolate (at least 45% cocoa solids) at an average 11.7g a day. There was a wide variation in reported clinically relevant depressive symptoms in this group. So, it is not valid to draw any real conclusion from the data collected.
For total chocolate consumption, the authors accurately report no statistically significant association with clinically relevant depressive symptoms.
However, they then claim eating more chocolate is of benefit, based on fewer symptoms among those who ate the most.
In fact, depressive symptoms were most common in the third quartile of chocolate consumption (100g a day), followed by the first (4-35g a day), then the second (37-95g a day), with the fewest symptoms in the highest quartile (104-454g a day). Claims about risk across sub-sets of data such as quartiles are only valid if the risks follow a consistent trend from one sub-set to the next.
The basic problems come from measurements and the many confounding factors. This study can’t validly be used to justify eating more chocolate of any kind. — Rosemary Stanton
Research Checks interrogate newly published studies and how they’re reported in the media. The analysis is undertaken by one or more academics not involved with the study, and reviewed by another, to make sure it’s accurate.
HILDA surveys collate data on the “reported diagnosis” of depression and anxiety disorders. Many people with these conditions have remained undiagnosed by a health practitioner, so it could simply be a matter of more people seeking professional help and getting diagnosed.
To find out whether there is a real increase, we need to survey a sample of the public about their symptoms rather than ask about whether they have been diagnosed. This has been done for almost two decades in the National Health Survey.
This graph shows the percentage of the population reporting very high levels of depression and anxiety symptoms over the previous month, from 2001 to 2017-18.
Rather than worsening, the nation’s mental health has been steady over this period.
Shouldn’t our mental health be improving?
So it seems while our mental health is not getting worse, we are more likely to get diagnosed. With increased diagnosis, it’s no surprise Australians have been rapidly embracing treatments for mental-health problems.
Psychological treatment has also skyrocketed, particularly after the Australian government introduced Medicare coverage for psychology services in 2006. There are now around 20 psychology services per year for every 100 Australians.
The real concern is why we’re not seeing any benefit from these large increases in diagnosis and treatment. In theory, our mental health should be improving.
There are two likely reasons for the lack of progress: the treatments are often not up to standard and we have neglected prevention.
Antidepressants, for example, are most appropriate for severe depression, but are often used to treat people with mild symptoms that reflect difficult life circumstances.
Psychological treatments can be effective, but require many sessions. Around 16 to 20 sessions are recommended to treat depression. Getting a couple of sessions with a psychologist is too often the norm and unlikely to produce much improvement.
The big area of neglect in mental health is prevention. Australia achieved enormous gains in physical health during the 20th century, with big drops in premature death. Prevention of disease and injury played a major role in these gains.
We might expect a similar approach to work for mental-health problems, which are the next frontier for improving the nation’s health. However, while we have been putting increasing resources into treatment, prevention has been neglected.
Parents who are in conflict with each other and fight a lot, for example, may increase their children’s risk for depression and anxiety disorders, while parents who show warmth and affection towards their children decrease their risk. Parents can be trained to reduce these risk factors and increase protective factors.
Yet successive Australian governments have lacked the political will to invest in prevention.
Where to next?
The Australian government has asked the Productivity Commission to investigate mental health. This is an important opportunity to consider whether Australia should be heading in a very different direction in its approach to mental health.
While we’ve had many previous inquiries, this one is different because it’s looking at the social and economic benefits of mental health to the nation. This broader perspective is important because action on prevention is a whole-of-government concern with resource implications and benefits that extend well beyond the health sector.