There’s no such thing as ‘alternative facts’. 5 ways to spot misinformation and stop sharing it online




Mark Pearson, Griffith University

The blame for the recent assault on the US Capitol and President Donald Trump’s broader dismantling of democratic institutions and norms can be laid at least partly on misinformation and conspiracy theories.

Those who spread misinformation, like Trump himself, are exploiting people’s lack of media literacy — it’s easy to spread lies to people who are prone to believe what they read online without questioning it.

We are living in a dangerous age where the internet makes it possible to spread misinformation far and wide and most people lack the basic fact-checking abilities to discern fact from fiction — or, worse, the desire to develop a healthy skepticism at all.




Read more:
Stopping the spread of COVID-19 misinformation is the best 2021 New Year’s resolution


Journalists are trained in this sort of thing — that is, the responsible ones who are trying to counter misinformation with truth.

Here are five fundamental lessons from Journalism 101 that all citizens can learn to improve their media literacy and fact-checking skills:

1. Distinguishing verified facts from myths, rumours and opinions

Cold, hard facts are the building blocks for considered and reasonable opinions in politics, media and law.

And there are no such things as “alternative facts” — facts are facts. Just because a falsity has been repeated many times by important people and their affiliates does not make it true.

We cannot expect the average citizen to have the skills of an academic researcher, journalist or judge in determining the veracity of an asserted statement. However, we can teach people some basic strategies before they mistake mere assertions for actual facts.

Does a basic internet search show these assertions have been confirmed by usually reliable sources – such as non-partisan mainstream news organisations, government websites and expert academics?

Students are taught to look to the URL of more authoritative sites — such as .gov or .edu — as a good hint at the factual basis of an assertion.
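If you want to automate that first-pass check, here is a minimal sketch in Python; the list of suffixes is an illustrative assumption, and a matching suffix is only a hint, not proof that a claim is accurate.

```python
# Minimal sketch: flag whether a link points to a government or education domain.
# A matching suffix is only a hint of authority, not proof the content is true.
from urllib.parse import urlparse

AUTHORITATIVE_SUFFIXES = (".gov", ".edu", ".gov.au", ".edu.au")  # illustrative list

def looks_authoritative(url: str) -> bool:
    hostname = urlparse(url).hostname or ""
    return hostname.endswith(AUTHORITATIVE_SUFFIXES)

print(looks_authoritative("https://www.usa.gov/"))               # True
print(looks_authoritative("https://totally-real-news.example"))  # False
```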

Searches and hashtags in social media are much less reliable as verification tools because you could be fishing within the “bubble” (or “echo chamber”) of those who share common interests, fears and prejudices – and are more likely to be perpetuating myths and rumours.

2. Mixing up your media and social media diet

We need to break out of our own “echo chambers” and our tendency to access only the news and views of those who agree with us, on the topics that interest us and where we feel most comfortable.

For example, over much of the past five years, I have deliberately switched between various conservative and liberal media outlets when something important has happened in the US.

By looking at the coverage of the left- and right-wing media, I can hope to find a common set of facts both sides agree on — beyond the partisan rhetoric and spin. And if only one side is reporting something, I know to question this assertion and not just take it at face value.

3. Being skeptical and assessing the factual premise of an opinion

Journalism students learn to approach the claims of their sources with a “healthy skepticism”. For instance, if you are interviewing someone and they make what seems to be a bold or questionable claim, it’s good practice to pause and ask what facts the claim is based on.

Students are taught in media law this is the key to the fair comment defence to a defamation action. This permits us to publish defamatory opinions on matters of public interest as long as they are reasonably based on provable facts put forth by the publication.

The ABC’s Media Watch used this defence successfully (at trial and on appeal) when it criticised a Sydney Sun-Herald journalist’s reporting that claimed toxic materials had been found near a children’s playground.

This assessment of the factual basis of an opinion is not reserved for defamation lawyers – it is an exercise we can all undertake as we decide whether someone’s opinion deserves our serious attention and republication.




Read more:
Teaching children digital literacy skills helps them navigate and respond to misinformation


4. Exploring the background and motives of media and sources

A key skill in media literacy is the ability to look behind the veil of those who want our attention — media outlets, social media influencers and bloggers — to investigate their allegiances, sponsorships and business models.

For instance, these are some key questions to ask:

  • who is behind that think tank whose views you are retweeting?

  • who owns the online newspaper you read and what other commercial interests do they hold?

  • is your media diet dominated by news produced from the same corporate entity?

  • why does someone need to be so loud or insulting in their commentary; is this indicative of their neglect of important facts that might counter their view?

  • what might an individual or company have to gain or lose by taking a position on an issue, and how might that influence their opinion?

Just because someone has an agenda does not mean their facts are wrong — but it is a good reason to be even more skeptical in your verification processes.




Read more:
Why is it so hard to stop COVID-19 misinformation spreading on social media?


5. Reflecting and verifying before sharing

We live in an era of instant republication. We immediately retweet and share content we see on social media, often without even having read it thoroughly, let alone having fact-checked it.

Mindful reflection before pressing that sharing button would allow you to ask yourself, “Why am I even choosing to share this material?”

You could also help shore up democracy by engaging in the fact-checking processes mentioned above to avoid being part of the problem by spreading misinformation.

Mark Pearson, Professor of Journalism and Social Media, Griffith Centre for Social and Cultural Research, Griffith University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

3.2 billion images and 720,000 hours of video are shared online daily. Can you sort real from fake?




T.J. Thomson, Queensland University of Technology; Daniel Angus, Queensland University of Technology, and Paula Dootson, Queensland University of Technology

Twitter over the weekend “tagged” as manipulated a video showing US Democratic presidential candidate Joe Biden supposedly forgetting which state he’s in while addressing a crowd.

Biden’s “hello Minnesota” greeting contrasted with prominent signage reading “Tampa, Florida” and “Text FL to 30330”.

The Associated Press’s fact check confirmed the signs were added digitally and the original footage was indeed from a Minnesota rally. But by the time the misleading video was removed it already had more than one million views, The Guardian reports.

If you use social media, the chances are you see (and forward) some of the more than 3.2 billion images and 720,000 hours of video shared daily. When faced with such a glut of content, how can we know what’s real and what’s not?

While one part of the solution is an increased use of content verification tools, it’s equally important we all boost our digital media literacy. Ultimately, one of the best lines of defence — and the only one you can control — is you.

Seeing shouldn’t always be believing

Misinformation (when you accidentally share false content) and disinformation (when you intentionally share it) in any medium can erode trust in civil institutions such as news organisations, coalitions and social movements. However, fake photos and videos are often the most potent.

For those with a vested political interest, creating, sharing and/or editing false images can distract, confuse and manipulate viewers to sow discord and uncertainty (especially in already polarised environments). Posters and platforms can also make money from the sharing of fake, sensationalist content.

Only 11-25% of journalists globally use social media content verification tools, according to the International Centre for Journalists.




Read more:
Facebook is tilting the political playing field more than ever, and it’s no accident


Could you spot a doctored image?

Consider this photo of Martin Luther King Jr.

This altered image clones part of the background over King Jr’s finger, so it looks like he’s flipping off the camera. It has been shared as genuine on Twitter, Reddit and white supremacist websites.

In the original 1964 photo, King flashed the “V for victory” sign after learning the US Senate had passed the civil rights bill.

Beyond adding or removing elements, there’s a whole category of photo manipulation in which images are fused together.

Earlier this year, a photo of an armed man was photoshopped by Fox News, which overlaid the man onto other scenes without disclosing the edits, the Seattle Times reported.

Similarly, the image below was shared thousands of times on social media in January, during Australia’s Black Summer bushfires. The AFP’s fact check confirmed it is not authentic and is actually a combination of several separate photos.

Fully and partially synthetic content

Online, you’ll also find sophisticated “deepfake” videos showing (usually famous) people saying or doing things they never did. Less advanced versions can be created using apps such as Zao and Reface.

A team from the Massachusetts Institute of Technology created this fake video showing US President Richard Nixon reading lines from a speech crafted in case the 1969 moon landing failed. (YouTube)

Or, if you don’t want to use your photo for a profile picture, you can default to one of several websites offering hundreds of thousands of AI-generated, photorealistic images of people.

AI-generated faces.
These people don’t exist; they’re just images generated by artificial intelligence.
Generated Photos, CC BY

Editing pixel values and the (not so) simple crop

Cropping can greatly alter the context of a photo, too.

We saw this in 2017, when a US government employee edited official pictures of Donald Trump’s inauguration to make the crowd appear bigger, according to The Guardian. The staffer cropped out the empty space “where the crowd ended” for a set of pictures for Trump.

Views of the crowds at the inaugurations of former US President Barack Obama in 2009 (left) and President Donald Trump in 2017 (right).
AP

But what about edits that only alter pixel values such as colour, saturation or contrast?

One historical example illustrates the consequences of this. In 1994, Time magazine’s cover of OJ Simpson considerably “darkened” Simpson in his police mugshot. This added fuel to a case already plagued by racial tension, to which the magazine responded:

No racial implication was intended, by Time or by the artist.

Tools for debunking digital fakery

For those of us who don’t want to be duped by visual mis/disinformation, there are tools available — although each comes with its own limitations (something we discuss in our recent paper).

Invisible digital watermarking has been proposed as a solution. However, it isn’t widespread and requires buy-in from both content publishers and distributors.

Reverse image search (such as Google’s) is often free and can be helpful for identifying earlier, potentially more authentic copies of images online. That said, it’s not foolproof because it:

  • relies on unedited copies of the media already being online
  • doesn’t search the entire web
  • doesn’t always allow filtering by publication time. Some reverse image search services such as TinEye support this function, but Google’s doesn’t.
  • returns only exact matches or near-matches, so it’s not thorough. For instance, editing an image and then flipping its orientation can fool Google into thinking it’s an entirely different one (see the sketch after this list).
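To see why simple edits defeat exact or near-exact matching, here is a minimal sketch in Python (using the open-source Pillow and imagehash libraries, and a hypothetical local file name) that compares a perceptual hash of an image with the hash of its mirrored copy. It illustrates the general problem only, not how Google's matching actually works.

```python
# Minimal sketch (not Google's algorithm): even a "perceptual" hash changes
# substantially when an image is mirrored, so exact/near-exact matching can
# miss flipped copies of a doctored photo.
# Requires: pip install pillow imagehash
from PIL import Image, ImageOps
import imagehash

original = Image.open("rally_photo.jpg")   # hypothetical local file
flipped = ImageOps.mirror(original)        # horizontal flip

h_original = imagehash.phash(original)
h_flipped = imagehash.phash(flipped)

# Subtracting two hashes gives the Hamming distance between the 64-bit hashes;
# small values (roughly under 10) suggest "similar", large values "different".
print(f"Hamming distance: {h_original - h_flipped}")  # typically large here
```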



Read more:
Instead of showing leadership, Twitter pays lip service to the dangers of deep fakes


Most reliable tools are sophisticated

Meanwhile, manual forensic detection methods for visual mis/disinformation focus mostly on edits visible to the naked eye, or rely on examining features that aren’t included in every image (such as shadows). They’re also time-consuming, expensive and need specialised expertise.

Still, you can access work in this field by visiting sites such as Snopes.com — which has a growing repository of “fauxtography”.

Computer vision and machine learning also offer relatively advanced detection capabilities for images and videos. But they too require technical expertise to operate and understand.

Moreover, improving them involves using large volumes of “training data”, but the image repositories used for this usually don’t contain the real-world images seen in the news.

If you use an image verification tool such as the REVEAL project’s image verification assistant, you might need an expert to help interpret the results.

The good news, however, is that before turning to any of the above tools, there are some simple questions you can ask yourself to potentially figure out whether a photo or video on social media is fake. Think:

  • was it originally made for social media?
  • how widely and for how long was it circulated?
  • what responses did it receive?
  • who were the intended audiences?

Quite often, the logical conclusions drawn from the answers will be enough to weed out inauthentic visuals. You can access the full list of questions, put together by Manchester Metropolitan University experts, here.

T.J. Thomson, Senior Lecturer in Visual Communication & Media, Queensland University of Technology; Daniel Angus, Associate Professor in Digital Communication, Queensland University of Technology, and Paula Dootson, Senior Lecturer, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Facebook vs news: Australia wants to level the playing field, Facebook politely disagrees




Tim Dwyer, University of Sydney

The Australian government is setting out to develop a “bargaining code” to address power imbalances between news media publishers and digital platforms such as Facebook and Google. The creation of this code was recommended last year in the final report of the Digital Platforms Inquiry held by the Australian Competition and Consumer Commission (ACCC).

The ACCC is planning to publish a draft version of the code at the end of July, but in the meantime it has asked interested parties to contribute their views. Most submissions won’t be made public until the draft code is released, but some stakeholders – including Facebook – have published their submissions themselves.

In its submission, Facebook sets out to rebut the ACCC’s understanding of the digital media landscape.

Facebook argues it doesn’t really need news publishers because news content is substitutable, and anyway the platform prioritises content from family and friends in people’s news feeds.

In effect, Facebook is saying it does more good than harm to journalism and news media businesses. The bargaining process hinges on a dispute over the value of news content and exactly what it contributes to the platform’s business – which is currently unclear, particularly to those outside the tent.




Read more:
No more negotiating: new rules could finally force Google and Facebook to pay for news


Valuing news

Facebook’s approach plays into a narrative about how consumers and advertisers migrated to the web in the early 21st century, collapsing the 150-year-old advertising model of newspapers.

Historically, news was the “poor cousin” in direct commercial arrangements between advertisers and newspapers (and later broadcasters). News evolved as a byproduct of this exchange and so it remains: secondary to the main game, a kind of subsidy and a “filler” to be used by these giant digital machines of platform capitalism.

But news is also acknowledged as a public good with broader societal benefits. Platforms are slowly realising they cannot avoid regulation to reduce the harms that result from their own market dominance.

Facebook’s chief executive Mark Zuckerberg has identified the platform’s key problematic areas as “harmful content” (such as hate speech and inappropriate imagery) and “election content” (such as targeted political advertising).

Facebook itself has moved from strongly opposing external regulatory interventions to guardedly accepting the idea, as long as the particular regulation suits them.




Read more:
Media Files: ACCC seeks to clip wings of tech giants like Facebook and Google but international effort is required


A strategic rebuttal

In its ACCC submission, Facebook argues it hasn’t contributed to the demise of news businesses by hoovering up advertising revenue. Instead, it points out the rise of the internet had already sent news media into structural decline.

If anyone is to blame, according to Facebook, it is the news businesses themselves who didn’t see the digital tsunami on the horizon.

Unsurprisingly, Facebook does not mention its own substantial market power: with Google, the social media giant carries the bulk of online advertising. As US media scholar Victor Pickard has noted, Facebook and Google between them collect 85% of all growth in digital advertising revenue, leaving very little for news publishers.

Facebook’s take on the news market

Facebook argues the ACCC, the news industry and the rest of us are all suffering from “misconceptions”. In broad terms these are: that Facebook is responsible for the market failure of news; that it “steals” news content and news publishers have no control over its surfacing; and that there’s a value imbalance between the platforms and news media businesses which favours Facebook, and therefore Facebook should compensate the businesses at commercial rates.

However, Australians are increasingly getting their news via social media newsfeeds. Research from the University of Canberra shows the COVID-19 pandemic has boosted this trend, and Reuters has found older Australians too are increasingly using social media as a pathway to news.

Australians are increasingly getting their news via social media.
Shutterstock

Clearly, digital platforms and news media businesses have a symbiotic relationship. But it is far from an equitable one: with a market capitalisation of US$671 billion, annual revenue of more than US$70 billion, and around 1.73 billion users every day, Facebook dwarfs any news media business.

As social media platforms are growing more important when it comes to accessing news, and news is a social good, the ACCC is calling for a more sustainable, if not an aspirationally equitable relationship.

Facebook likes the idea of a new Australian Digital Media Council modelled on the Australian Press Council. It would arbitrate disputes between news media publishers and digital platforms.

But is this a reasonable comparison? Can news publishers be equated with individual complainants who seek remedies?

Trying to dodge responsibility?

The central theme of Facebook’s submission is a refusal to acknowledge there is a power imbalance between news media businesses and Facebook and Google that needs to be addressed.

Facebook questions the idea of even casting its relationship with the news media sector in that way. Indeed, the company appears to be in denial about the simple fact noted by Treasurer Josh Frydenberg when the Digital Platforms Inquiry report was handed down:

Make no mistake, these companies are among the most powerful and valuable in the world.

If nothing else, Facebook has demonstrated its well-oiled PR machine and the phalanx of people ready to defend its surging revenue base. Its counter-arguments to the ACCC are evidence of this, and also a determination to maintain absolute algorithmic control over the news feed.

From Facebook’s perspective, a key impact of COVID-19 has been that people are now spending increasing amounts of time on their platform.

Tim Dwyer, Associate Professor, Department of Media and Communications, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Don’t be phish food! Tips to avoid sharing your personal information online




Nik Thompson, Curtin University

Data is the new oil, and online platforms will siphon it off at any opportunity. Platforms increasingly demand our personal information in exchange for a service.

Avoiding online services altogether can limit your participation in society, so the advice to just opt out is easier said than done.

Here are some tricks you can use to avoid giving online platforms your personal information. Some ways to limit your exposure include using “alternative facts”, using guest checkout options, and using a burner email.

Alternative facts

While “alternative facts” is a term coined by White House press staff to describe factual inaccuracies, in this context it refers to false details supplied in place of your personal information.




Read more:
Hackers are now targeting councils and governments, threatening to leak citizen data


This is an effective strategy to avoid giving out information online. Though platforms might insist you complete a user profile, they can do little to check whether that information is correct. For example, they can check whether a phone number contains the correct number of digits, or whether an email address has a valid format, but that’s about it.
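As a rough illustration of how shallow such checks are, here is a minimal sketch in Python of the kind of format-only validation a platform might run. The patterns are simplified assumptions, and note that entirely invented details still pass.

```python
# Minimal sketch of format-only validation: it checks the *shape* of the data,
# not whether the data is true. Patterns are deliberately simplified.
import re

def plausible_email(value: str) -> bool:
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value) is not None

def plausible_phone(value: str, digits: int = 10) -> bool:
    # Count only the digits, ignoring spaces and punctuation.
    return len(re.sub(r"\D", "", value)) == digits

# Completely made-up details pass just as easily as real ones:
print(plausible_email("jane.citizen@example.com"))  # True
print(plausible_phone("0400 000 000"))              # True
```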

When a website requests your date of birth, address, or name, consider how this information will be used and whether you’re prepared to hand it over.

There’s a distinction to be made between which platforms do or don’t warrant using your real information. If it’s an official banking or educational institute website, then it’s important to be truthful.

But an online shopping, gaming, or movie review site shouldn’t require the same level of disclosure, and using an alternative identity could protect you.

Secret shopper

Online stores and services often encourage users to set up a profile, offering convenience in exchange for information. Stores value your profile data, as it can provide them additional revenue through targeted advertising and emails.

But many websites also offer a guest checkout option to streamline the purchase process. After all, one thing as valuable as your data is your money.

So unless you’re making very frequent purchases from a site, use guest checkout and skip profile creation altogether. Even without disclosing extra details, you can still track your delivery, as tracking is provided by transport companies (and not the store).

Also consider your payment options. Many credit cards and payment merchants such as PayPal provide additional buyer protection, adding another layer of separation between you and the website.

Avoid sharing your bank account details online, and instead use an intermediary such as PayPal, or a credit card, to provide additional protection.

If you use a credit card (even prepaid), then even if your details are compromised, any potential losses are limited to the card balance. Also, with credit cards this balance is effectively the bank’s funds, meaning you won’t be left out of pocket for any fraudulent transactions.

Burner emails

An email address is usually the first item a site requests.

They also often require email verification when a profile is created, and that verification email is probably the only one you’ll ever want to receive from the site. So rather than handing over your main email address, consider a burner email.

This is a fully functional but disposable email address that remains active for about 10 minutes. You can get one for free from online services including Maildrop, Guerilla Mail and 10 Minute Mail.

Just make sure you don’t forget your password, as you won’t be able to recover it once your burner email becomes inactive.

The 10 Minute Mail website offers free burner emails.
screenshot

The risk of being honest

Every online profile containing your personal information is another potential target for attackers. The more profiles you make, the greater the chance of your details being breached.

A breach in one place can lead to others. Names and emails alone are sufficient for email phishing attacks. And a phish becomes more convincing (and more likely to succeed) when paired with other details such as your recent purchasing history.

Surveys indicate about half of us recycle passwords across multiple sites. While this is convenient, it means if a breach at one site reveals your password, then attackers can hack into your other accounts.

In fact, even just an email address is a valuable piece of intelligence, as emails are used as a login for many sites, and a login (unlike a password) can sometimes be impossible to change.

Obtaining your email could open the door for targeted attacks on your other accounts, such as social media accounts.




Read more:
The ugly truth: tech companies are tracking and misusing our data, and there’s little we can do


In “password spraying” attacks, cybercriminals test common passwords against many emails/usernames in hopes of landing a correct combination.

The bottom line is, the safest information is the information you never release. And practising alternatives to disclosing your true details could go a long way to limiting your data being used against you.

Nik Thompson, Senior Lecturer, Curtin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Cyber threats at home: how to keep kids safe while they’re learning online




Paul Haskell-Dowland, Edith Cowan University and Ismini Vasileiou, De Montfort University

Before COVID-19, children would spend a lot of the day at school. There they would be taught about internet safety and be protected when going online by systems that filter or restrict access to online content.

Schools provide protective environments to restrict access to content such as pornography and gambling. They also protect children from various threats such as viruses and unmoderated social media.

This is usually done using filters and blacklists (lists of websites or other resources that aren’t allowed) applied to school devices or through the school internet connection.

But with many children learning from home, parents may not be aware of the need for the same safeguards.

Many parents are also working from home, which may limit the time to explore and set up a secure online environment for their children.

So, what threats are children exposed to and what can parents do to keep them safe?

What threats might children face?

With an increased use of web-based tools, downloading new applications and a dependence on email, children could be exposed to a new batch of malware threats in the absence of school-based controls.

This can include viruses and ransomware – for example, CovidLock (an application offering coronavirus-related information), which targets the Android operating system and changes the PIN code for the lock screen. If infected, the user can lose complete access to their device.

Children working at home are not usually protected by the filters provided by their school.

Seemingly innocent teaching activities like the use of YouTube can expose children to unexpected risks given the breadth of inappropriate adult content available.

Most videos end with links to a number of related resources, the selection of which is not controlled by the school. Even using YouTube Kids, a subset of curated YouTube content filtered for appropriateness, has some risks. There have been reports of content featuring violence, suicidal themes and sexual references.




Read more:
Can you keep your kids safe watching YouTube?


Many schools are using video conferencing tools to maintain social interaction with students. There have been reports of cases of class-hijacking, including Zoom-bombing where uninvited guests enter the video-conference session.

The FBI Boston field office has documented inappropriate comments and imagery introduced into an online class. A similar case in Connecticut resulted in a teenager being arrested after further Zoom-bombing incidents.




Read more:
‘Zoombombers’ want to troll your online meetings. Here’s how to stop them


Because video conferencing is becoming normalised, malicious actors (including paedophiles) may seek to exploit this level of familiarity. They can persuade children to engage in actions that can escalate to inappropriate sexual behaviours.

The eSafety Office has reported a significant increase in a range of incidents of online harm since early March.

In a particularly sickening example, eSafety Office investigators said:

In one forum, paedophiles noted that isolation measures have increased opportunities to contact children remotely and engage in their “passion” for sexual abuse via platforms such as YouTube, Instagram and random webchat services.

Some families may be using older or borrowed devices if there aren’t enough for their children to use. These devices may not offer the same level of protection against common internet threats (such as viruses) as they may no longer be supported by the vendor (such as Microsoft or Apple) and be missing vital updates.

They may also be unable to run the latest protective software (such as antivirus) due to incompatibilities or simply being under-powered.

Error message when attempting to install a new application on an older device.
Author provided

What can parents do to protect children?

It’s worth speaking with the school to determine what safeguards may still function while away from the school site.

Some solutions operate at the device level rather than depending on location, so it is possible the standard protections will still apply at home.

Some devices support filters and controls natively. For example, many Apple devices offer Screen Time controls to limit access to apps and websites and apply time limits to device use (recent Android devices might have the Digital Wellbeing feature with similar capabilities).

Traditional mechanisms like firewalls and anti-virus tools are still essential on laptops and desktop systems. It is important these are not just installed and forgotten. Just like the operating systems, they need to be regularly updated.

There is a wealth of advice available to support children using technology at home.

The Australian eSafety Commissioner’s website, for instance, provides access to a range of resources for parents, children and educators.

But if you’re feeling overwhelmed by these materials, some key messages include:

  • ensuring (where appropriate) the device is regularly updated. This can include updating the operating system such as Windows, Android or Mac

  • using appropriate antivirus software (and ensuring it is also kept up to date)

  • applying parental controls to limit screen time, specific app use (blocking or limiting use), or specific website blocks (such as blocking access to YouTube)

  • on some devices, parental controls can limit use of the camera and microphone to prevent external communication

  • applying age restrictions to media content and websites (the Communications Alliance has a list of accredited family friendly filters)

  • monitoring your child’s use of apps or web browsing activities

  • when installing apps for children, checking online and talking to other parents about them

  • configuring web browsers to use “safe search”

  • ensuring children use devices in sight of parents

  • talking to your children about online behaviours.




Read more:
Children can be exposed to sexual predators online, so how can parents teach them to be safe?


While technology can play a part, ensuring children work in an environment where there is (at least periodic) oversight by parents is still an important factor.

Paul Haskell-Dowland, Associate Dean (Computing and Security), Edith Cowan University and Ismini Vasileiou, Associate Professor in Information Systems, De Montfort University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

So you’re going to school online – here are 6 ways to make the most of it




Claire Brown, Victoria University and Rannah Scamporlino, Victoria University

Effective learning is a two-way process between the teacher and students, meaning both need to engage.

If a student simply sits and listens to new information without engaging or applying it, it’s called passive learning. Active learning is when students engage with new material, making connections to concepts they have learned previously.

According to one of the world’s leading university educators, Harvard University’s Professor Eric Mazur, “interactive learning triples students’ gains in knowledge”.

Here are six things students can do while studying online to ensure they are learning actively and making gains.

1. Organise a learning space and dress for learning

Balancing a laptop on your knees on your bed, or with the television on in the background, is not the best way to study. Students learn best when their learning space minimises distractions.

A good learning space has a table and chair, good lighting, good air flow, is away from distractions like television and noise, has good connectivity for digital devices, and is organised with the usual things students have at school, such as pens, paper, a calculator and other study materials.

Learning online is like being at school in that you need to be physically and mentally prepared to learn. One study suggests what you wear can affect your attention to a task. So it might help not to be in your pyjamas even if your study space is in your bedroom.

2. Organise your learning time

Students with good time management skills tend to do better academically.

There is no easy answer to how long students should be studying at home each day. Students should plan a study timetable dividing their day into learning, revision and rest blocks.

“Zoom fatigue” has been identified as an emerging problem with online studying and meetings caused by the different ways our brains process information delivered online. One suggestion is an online session should be no longer than 45 minutes with a 15-minute break.




Read more:
Trying to homeschool because of coronavirus? Here are 5 tips to help your child learn


Back-to-back sessions should be avoided and the time between sessions should be used to step away from the computer to rest your brain, body and eyes. It is important to stand up and move around every 30 minutes.

Students should work with teachers to revise their schedule each day and stick to what works for them.

3. Manage distractions

Because students will be studying in a different space, they may get distracted by what other people are doing. If you can, share your study timetable with others in the house, and ask for their help to keep focused.

When you’re in a learning block of time, turn off social media and close browser tabs you don’t need. If you’re using the Google Chrome browser, it has an extension called StayFocusd. Students can use this to set a period of time in which to block potential distractions like notifications from Instagram, Snapchat and other applications.

If you are sharing a digital device with other family members, try to agree on a roster that fits with everyone’s timetables. Work out who needs the device at specific times and put that time on a master timetable that is shared by everyone.

4. Take notes

Our memories are not stable and we frequently overestimate how much we can remember. We forget at least 40% of new information within the first 24 hours of first reading or hearing it. That’s why it’s important to take notes.

Use different-colour markers to make connections between concepts.
Shutterstock

Research is unclear about whether it is better to take notes digitally or by hand. Some researchers think it is a matter of preference.

The most important thing is to follow a good note-taking process. This involves:

  • writing an essential question that captures the key learning points of the topic

  • revising your notes. Use different colours and highlighters to make connections between chunks of information; add new ideas and write study questions in the margin. Compare notes with a study buddy to improve and learn from each other

  • writing a summary that links all the information together and answers the essential question you wrote down initially

  • revising your notes within 24 hours, seven days, and then each month until you are tested on that knowledge (a small sketch of this schedule follows this list).
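To make that schedule concrete, here is a tiny sketch in Python that turns a study date into revision reminders; the intervals come from the list above, with monthly reviews approximated as 30-day steps.

```python
# Tiny sketch: revision dates per the schedule above
# (24 hours, 7 days, then roughly monthly, approximated as 30-day steps).
from datetime import date, timedelta

def revision_dates(studied_on, months=3):
    dates = [studied_on + timedelta(days=1), studied_on + timedelta(days=7)]
    dates += [studied_on + timedelta(days=30 * i) for i in range(1, months + 1)]
    return dates

for d in revision_dates(date(2020, 5, 4)):
    print(d.isoformat())
```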




Read more:
What’s the best, most effective way to take notes?


5. Adopt a growth mindset

In the 1990s, American psychologist Carol Dweck developed the theory of the growth mindset.

It grew out of studies in which primary school children were engaged in a task, and then praised either for their existing capacities, such as intelligence, or the effort they invested in the task.

The students who were praised for their effort were more likely to persist with finding a solution to the task. They were also more likely to seek feedback about how to improve. Those praised for their intelligence were less likely to persist with the more difficult tasks and to seek feedback on how their peers did on the task.




Read more:
You can do it! A ‘growth mindset’ helps us learn


The growth mindset assumes capacities can be developed or “grown” through learning and effort. So if you don’t understand something straight away, working at it will help you get there.

If you are engaging in negative self-talk, change the words. For example, instead of saying, “This is too hard”, try saying, “What haven’t I tried yet to figure this out?”

6. Ask questions and collaborate

Ask teachers questions about anything that is unclear as soon as possible. Give teachers frequent feedback. Teachers appreciate suggestions that help improve student learning.

Set up online study groups. Learning is a social activity. We learn best by learning with others, and when learning is fun. Studying with friends helps you clarify new concepts and language, and stay connected.

Claire Brown, National Director, AVID Australia, Victoria University and Rannah Scamporlino, Education Coordinator, AVID Australia, Victoria University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Coronavirus: as culture moves online, regional organisations need help bridging the digital divide


Indigo Holcombe-James, RMIT University

Museums, galleries and artist collectives around the world are shutting their doors and moving online in response to coronavirus. But engaging with audiences online requires access, skills and investment.

My research with remote Aboriginal art centres in the Northern Territory and community museums in Victoria shows moving to digital can widen the gap between urban and regional organisations.

Local spaces are vital. They ensure our national story is about more than the metropolitan, allowing artists to create – and audiences to engage with – local art and history. These art centres and museums bring communities together.

This cannot be replicated online.

Australia’s digital divide influences the ability of museums and galleries to move online, and the ability of audiences to find them there.

Cultural organisations that cannot produce digital content risk getting left behind. If we don’t support regional and rural organisations in their move online – or relieve them from this pressure entirely – we run the risk of losing them.

More than metropolitan

Community museums are critical in collecting, preserving and enabling access to local history. Across Victoria, these community organisations hold around 10 million items.

Aboriginal art centres produce some of Australia’s best contemporary art, generating A$53 million in sales between 2008 and 2012.




Read more:
The other Indigenous coronavirus crisis: disappearing income from art


Digital platforms can make these contributions to our cultural life more accessible – particularly in these times of physical distancing. But artists in remote Aboriginal art centres and volunteer retirees running community museums are the most likely to experience digital disadvantage and the most likely to be left behind.

A digital divide

Australians are more likely to be digitally excluded when Indigenous, living in remote areas, or over the age of 65.

Community collecting is under-resourced and so regional museums rely on retiree volunteers.

Over 30% of Indigenous artists practising out of art centres are over 55, and artists are most likely to be earning from their art when aged over 65. These remote centres have poor access to web-capable devices and low-quality internet connections.




Read more:
Australia’s digital divide is not going away


The digital divide also exists for local audiences with access issues of their own.

Although most art centres and community museums have active websites and social media accounts, these are unlikely to be truly engaging or interactive.

Art centres tend to focus their digital platforms outside the community on commercial sales. Community museums focus on information about opening hours and events. They rarely have the expertise or capacity to create detailed online catalogues for audiences.

Exclusionary consequences

Cultural participation is fragmented along demographic and geographic lines. Cities house the majority of our major institutions, with city dwellers dominating visitation.

Digital inequality ensures barriers remain even for online collections. Regional and rural organisations are unlikely to have the specific skills, resourcing and devices to move fully online.

Under social distancing, cultural organisations that cannot produce digital content risk being left behind. This will disproportionately impact regional and rural organisations.

These organisations are critical for preserving the diversity of Australian stories. Aboriginal art centres and community museums provide spaces where the local is solidified. Communities are formed, documented, responded to and shared.

If these organisations cannot host the same web presence as major metropolitan institutions, even local audiences could divert their attention to the cities. Our local cultural organisations might go the way of our disappearing regional newspapers.

To survive the coming months, these organisations need targeted support to move online. Or a reprieve from the pressure to be completely digitally accessible: not all cultural consumption can happen online.

These physical community spaces will be more important than ever once social isolation rules are lifted.

Indigo Holcombe-James, Sessional academic, School of Media and Communications, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Coronavirus quarantine could spark an online learning boom


Carlo Perrotta, Monash University

The spread of the coronavirus disease known as COVID-19 is a public health emergency with economic and social ramifications in China and across the world. While the impacts on business are well documented, education is also facing the largest disruption in recent memory.

Institutions around the world are responding to travel bans and quarantines with a shift to online learning. The crisis may trigger an online boom for education – or at least make us more ready to cope with the next emergency.

Education disrupted

As many as 180 million Chinese students – primary, secondary and tertiary – are homebound or unable to travel. In China, the spring semester was originally scheduled to begin on February 17 but has now been postponed indefinitely. In response, Chinese institutions are attempting to switch to online education on a massive scale.

Effects of the epidemic are also being felt closer to home. Australian higher education is increasingly dependent on a steady flow of Chinese students, but the Australian government has restricted travel from China until at least 29 February. At the time of writing, thousands of students are still in limbo.




Read more:
Australian unis may need to cut staff and research if government extends coronavirus travel ban


As a result, Australian higher education institutions are trying to boost their online capacity to deliver courses to stranded and concerned students. Some universities – and some parts of universities – are better prepared than others. While all universities use online learning management systems and videoconferencing technology to some degree, there are no mandatory standards for online education.

This makes for huge variety among institutions, and even between individual courses, in how digitised they are. To make matters worse, not all staff are familiar with (or feel positive about) distance or blended learning.

Will ed-tech ever take off?

Educational technology has historically struggled with large-scale adoption and much has been written about the cycles of boom and bust of the ed-tech industry. It may even be legitimate to ask whether adoption is a goal any longer for many in the industry.

Nowadays, a critical observer could be forgiven for thinking that the most successful ed-tech companies only pay lip service to mass adoption. Instead, their energies are firmly directed at the more remunerative game of (overinflated) start-up funding and selling.

Yet visions of mass adoption are still what drives the volatile dynamics of ed-tech financing. Investors ultimately hope that an innovation will, at some point in the near future, be used by large numbers of students and teachers.

Is the coronavirus a ‘black swan’ for online learning?

In 2014 Michael Trucano, a World Bank specialist on education and technology policy, described the importance of “tipping points” to push educational technology into the mainstream. Trucano suggested that epidemics (he talked about the 2003 SARS epidemic, but the argument applies to COVID-19) could be “black swans”. The term is borrowed from the American thinker Nassim Nicholas Taleb, who uses it to describe unanticipated events with profound consequences.

During the SARS outbreak, according to Trucano, China was forced into boosting alternative forms of distance education. This led to pockets of deeper, more transformational uses of online tools, at least temporarily. The long-term effects are still unclear.




Read more:
The coronavirus outbreak is the biggest crisis ever to hit international education


The current landscape of global digital education suggests COVID-19 may result in more robust capabilities in regions with enough resources, connectivity and infrastructure. However, it is also likely to expose chronic deficiencies in less prepared communities, exacerbating pre-existing divides.

Investors appear to see this as a moment that could transform all kinds of online activity across the region. The stocks of Hong Kong-listed companies linked to online games, digital medical services, remote working and distance education have soared in recent days.

Online drawback

Adding to the complexity, students do not always welcome digital education, and research shows they are less likely to drop out when taught using “traditional” face-to-face methods.

Indeed, studies on the effectiveness of “virtual schools” have yielded mixed results. A recent study focusing on the US recommended virtual schools be restricted until the reasons for their poor performance are better understood.

Students may also oppose online learning because they perceive it as a sneaky attempt at forcing education down their throats. This may be what happened recently when DingTalk, a large Chinese messaging app, launched e-classes for schools affected by the coronavirus emergency. Unhappy students saw their forced vacation threatened and gave the app a bad rating on online stores in an attempt to drive it out of search results.

Perhaps this last story shouldn’t be taken too seriously, but it does highlight the importance of emotional responses in attempts to scale up an educational technology.

A permanent solution or a crisis response tool?

The importance of distance education in an increasingly uncertain world of global epidemics and other dramatic disruptions (such as wars and climate-related crises) is without doubt. So-called “developing countries” (including large rural regions in the booming Indian and Chinese economies) can benefit greatly from it, as it can help overcome emergencies and address chronic teacher shortages.

Once the current crisis passes, however, will things go “back to normal”? Or will we see a sustained increase in the mainstream adoption of online learning?

The answer is not at all obvious. Take Australia, for example. Even if we assume the COVID-19 emergency will lead to some permanent change in how more digitally-prepared Australian universities relate to Chinese students, it’s unclear what the change will look like.

Will we see more online courses and a growing market for Western-style distance education in Asia? Is this what the Chinese students (even the tech-savvy ones) really want? Is this what the Chinese economy needs?

Alternatively, perhaps, the crisis might lead to a more robust response system. Universities might develop the ability to move online quickly when they need to and go back to normal once things “blow over”, in a world where global emergencies look increasingly like the norm.

Carlo Perrotta, Senior lecturer, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

6 actions Australia’s government can take right now to target online racism



Paul Fletcher, Australia’s recently appointed minister for communications, cyber safety and the arts, says he wants to make the internet safe for everyone.
Markus Spiske / unsplash, CC BY

Andrew Jakubowicz, University of Technology Sydney

Paul Fletcher was recently appointed as Australia’s Minister for Communications, Cyber Safety and the Arts.

One of his stated priorities is to:

continue the Morrison Government’s work to make the internet a safer place for the millions of Australians who use it every day.

Addressing online racism is a vital part of this goal.

And not just because racism online is hurtful and damaging – which it is. This is also important because sometimes online racism spills into the real world with deadly consequences.




Read more:
Explainer: trial of alleged perpetrator of Christchurch mosque shootings


An Australian man brought up in the Australian cyber environment is the alleged murderer of 50 Muslims at prayer in Christchurch. Planning and live streaming of the event took place on the internet, and across international boundaries.

We must critically assess how this happened, and be clearheaded and non-ideological about actions to reduce the likelihood of such an event happening again.

There are six steps Australia’s government can take.

1. Reconsider international racism convention

Our government should remove its reservation on Article 4 of the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD).

In 1966 Australia declined to sign up to Article 4(a) of the ICERD. It was the only country that had signed the ICERD while deciding to file a reservation on Article 4(a). It’s this section that mandates the criminalisation of race hate speech and racist propaganda.

The ICERD entered into Australian law, minus Article 4(a), through the 1975 Racial Discrimination Act (RDA).

Article 4 concerns, such as they were, would enter the law as “unlawful” harassment and intimidation, with no criminal sanctions, twenty years later. This occurred through the 1995 amendments that produced Section 18 of the RDA, with its right for complainants to seek civil solutions through the Human Rights Commission.

With Article 4 ratified, the criminal law could encompass the worst cases of online racism, and the police would have some framework to pursue the worst offenders.




Read more:
Explainer: what is Section 18C and why do some politicians want it changed?


2. Extend international collaboration

Our government should extend Australia’s participation in the European cybercrime convention by adopting the First Additional Protocol.

In 2001 the Council of Europe opened the Budapest Convention on Cybercrime to signatories, establishing the first international instrument to address crimes committed over the internet. The add-on First Additional Protocol on criminalisation of acts of a racist and xenophobic nature came into effect in 2002.

Australia’s government – Labor at the time – initially considered including the First Additional Protocol in cybercrime legislation in 2009, then withdrew it soon after. Without it, our country is limited in how it can collaborate with other signatory countries in tracking down cross-border cyber racism.

3. Amend the eSafety Act

The Enhancing the Online Safety of Australians Act (until 2017 Enhancing the Online Safety of Children Act) established the eSafety Commissioner’s Office to pursue acts which undercut the safe use of the internet, especially through bullying.

The eSafety Act should be amended by Communications Minister Fletcher to extend the options for those harassed and intimidated, to include provisions similar to those found in NZ legislation. In effect this would mean people harassed online could take action themselves, or require the commissioner to act to protect them.

Such changes should be supported by staff able to speak the languages and operate in the cultural frames of those who are the most vulnerable to online race hate. These include Aboriginal Australians, Muslims, Jews and people of African and Asian descent.

4. Commit to retaining 18C

Section 18C of the RDA, known as the racial vilification provisions, allows individuals offended or intimidated by online race hate to seek redress.

The LNP government made two failed attempts between 2013 and 2019 to remove or dilute section 18C on free speech grounds.

Rather than just leaving this dangling into the future, the government should commit itself to retaining 18C.

Even if this does happen, unless Article 4 of the ICERD is ratified as mentioned above, Australia will still have no effective laws that target online race-hate speech by pushing back against perpetrators.

Legislation introduced by the Australian government in April 2019 does make companies such as Facebook more accountable for hosting violent content online, but does not directly target perpetrators of race hate. It’s private online groups that can harbour and grow race hate hidden from the law.




Read more:
New livestreaming legislation fails to take into account how the internet actually works


5. Review best practice in combating cyber racism

Australia’s government should conduct a public review of best practice worldwide in relation to combating cyber racism. For example, it could plan for an options paper for public discussion by the end of 2020, and legislation where required in 2021.

European countries now have a good sense of how their protocol on cyber racism has worked. In particular, it facilitates inter-country collaboration, and empowers the police to pursue organised race hate speech as a criminal enterprise.

Other countries such as New Zealand and Canada, with whom we often compare ourselves, have moved far beyond the very limited action taken by Australia.

6. Provide funds to stop racism

In conjunction with the states plus industry and civil society organisations, the Australian government should promote and resource “push back” against online racism. This can be addressed by reducing the online space in which racists currently pursue their goals of normalising racism.

Civil society groups such as the Online Hate Prevention Institute and All Together Now, and interventions like the currently stalled NSW Government program on Remove Hate from the Debate, are good examples of strategies that could achieve far more with sustained support from the federal government.

Such action characterises many European societies. Another good example is the World Wide Web Foundation (W3F) in North America, whose #Fortheweb campaign highlights safety issues for web users facing harassment and intimidation through hate speech.




Read more:
Racism in a networked world: how groups and individuals spread racist hate online


Slow change over time

Speaking realistically, the aim through these mechanisms cannot be to “eliminate” racism, which has deep structural roots. Rather, our goal should be to contain racism, push it back into ever smaller pockets, target perpetrators and force publishers to be far more active in limiting their users’ impacts on vulnerable targets.

Without criminal provisions, infractions of civil law are essentially let “through to the keeper”. The main players know this very well.

Our government has a responsibility to ensure publishers and platforms know what the community standards are in Australia. Legislation and regulation should enshrine, promote and communicate these standards – otherwise the vulnerable remain unprotected, and the aggressors continue smirking.

Andrew Jakubowicz, Emeritus Professor of Sociology, University of Technology Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.