Advanced technologies: methodological problems and evidence
Mauro Tettamanti, Geriatric Epidemiology Unit, IRCCS – Istituto di Ricerche Farmacologiche Mario Negri
• Health apps
• How to assess whether (an app) really works
• Apps for dementia (Computerized Cognitive Training)
• Computers, the internet and caregivers
• Wearable devices, geofencing and dementia
• Big data
Health apps
Apps for general wellness, apps for specific diseases, apps for clinicians, and apps equivalent to medical devices
Definition and regulations
Privacy laws and the European Directive on Misleading Advertising
European Medical Device Directive, applicable if the app is used for: "diagnosis, prevention, monitoring, treatment or alleviation of disease, injury or handicap, or to investigate, replace or modify the anatomy or a physiological process, or to control conception"
Some regulators:
(UK) Medicines and Healthcare Products Regulatory Agency (MHRA)
(USA) Food and Drug Administration (FDA)
EU: http://ec.europa.eu/growth/sectors/medical-devices/regulatory-framework/index_en.htm
Health apps
More than 150,000 health apps are currently available in Europe
Worldwide, health apps have been downloaded more than 100 billion times
Armstrong, BMJ 2015;351:h4597
BMJ 2015;351:h4597 doi: 10.1136/bmj.h4597 (Published 9 September 2015)
Do they work?
Which app should I use? Patients and doctors are making increasing use of health apps, but there is little guidance about how well they work. Stephen Armstrong reports
Evaluation in terms of efficacy (and safety)
To obtain the CE mark, a health app must (self-)certify compliance with a set of standards (ISO or IEC)
Currently, neither the FDA nor the EU requires controlled clinical trials
Could someone curate the reviews? The Happtique case
Crowdsourced feedback?
Stephen Armstrong freelance journalist, London, UK There are well over 150 000 health apps available in Europe1—from those designed to improve general wellness to apps that monitor medical conditions, apps for clinicians, and apps that function as medical devices. There have been more than 102 billion downloads of health apps worldwide yet there is little regulation or guidance available for doctors or patients on quality, safety, or efficacy.
What is the problem?
Since the UK government founded the Cochrane Centre in 1992, evidence based medicine has been at the heart of healthcare.2 With the burgeoning apps market, however, things are different. “There’s a huge and growing number of health apps out there, and with that comes a wide variation in quality, testing, and evaluation,” says Sarah Williams, senior health information officer at Cancer Research UK. “As with any new technology there’s a lot we still need to understand about whether they can be effective, especially in the long term, and, perhaps more importantly, whether they’re helping the people who really need it.”
Technically any app that makes efficacy claims needs to be able to produce some evidence to support its claims under the European Directive on Misleading Advertising. This may, however, not include the high quality evidence from randomised controlled trials that clinicians have come to expect. “People are increasingly using apps to monitor, manage, and even treat conditions but have no information on whether or how the apps have been calibrated, and it’s hard to find any information on the research used in development,” explains Patricia Wilkie, chair of the National Association for Patient Participation. “It’s a very murky area.”
Risks of uninformed choice “People choose health apps the same way as they choose any apps—quality of design and how easy they are to use,” explains Satish Misra, cardiology fellow at the Johns Hopkins Hospital in Baltimore and managing editor at app review site iMedicalApps. “That’s why many of the top downloaded apps are not evidence based—and some don’t even make sense.”
Six months ago Jeremy Wyatt, chair in eHealth Research at the Leeds Institute of Health Sciences and clinical adviser on new technology to the Royal College of Physicians, tested 19 apps on the iTunes store—both free and paid for—that claimed to diagnose the user’s risk of a heart attack over the next 10 years. “Given the same data, one app gave a risk of 19%, another gave a risk of 96%, and a third gave a risk of 137%,” he explains. “There was bad coding and poor research in half of the apps tested, with paid for apps performing worse than free apps.”
Yet patients and health professionals are increasingly using medical apps. Studies report that over 85% of health professionals use a smartphone and 30-50% use medical apps in clinical care.3 4 A survey of 233 NHS general surgical trainees working in Scotland5 found 82% had downloaded at least one medical app, 35% had used apps to help make clinical decisions, and 13% thought they had encountered errors. Some 58% thought that apps should be compulsorily regulated but none knew the name of any regulatory body.
What is the current system? All apps are regulated by the Data Protection Act and the European Directive on Misleading Advertising. In addition, the European Medical Device Directive considers apps used in “diagnosis, prevention, monitoring, treatment, or alleviation of disease, injury or handicap as well as investigating, replacing or modifying the anatomy or a physiological process or controlling conception” to be medical devices.6 These are regulated by the Medicines and Healthcare Products Regulatory Agency (MHRA) and have to undergo a conformity assessment by MHRA notified bodies to secure a CE certificate. The Mersey Burns App, which calculates how much fluid a burns victim needs, was the first to win a CE mark. It was accredited after clinical trials showed it was more accurate and quicker at calculating the result than doctors working out the figure by hand. It’s hard to get a clear picture on how many apps have a CE mark—an MHRA spokesman said that the authority kept no register or list of CE marked apps, leaving that to the numerous notified bodies (there are 15 in the United Kingdom alone) individually.
[email protected]
Armstrong, BMJ 2015;351:h4597
Powell, JAMA 2014; 311:1851
How do we know whether medical apps work? Smartphone apps have the potential to transform the way the public manage their health and interact with health services, says Margaret McCartney, but regulation of medical apps has only just started
Should we be worried?
Margaret McCartney general practitioner, Glasgow Angry Birds, Cut the Rope, and Fruit Ninja are favourite games among smartphone owners, but many apps are for function rather than fun, such as maps and shopping lists, and a host of medical apps that say they offer us ways to better health.
Some are aimed at healthcare professionals but are available to all. The National Institute for Health and Clinical Excellence (NICE), the Scottish Intercollegiate Guidelines Network, and the British National Formulary have free apps allowing easy and rapid access to their advice. But other apps do not just reproduce advice available elsewhere. In January the UK Medicines and Healthcare Products Regulatory Agency (MHRA) approved its first app: Mersey Burns is a free tool that calculates burn area percentages and fluid requirements.
• interactivity: they are not books
• at hand and ready to accept new data: they are not websites
• These apps could be tested in real-life situations to assess evidence of efficacy and absence of side effects.
• Shall we look for the evidence?
Other medical apps are aimed at the public. Many advise on diet and exercise, and these vary widely in quality,1 2 but newer apps purport to help diagnosis. The NHS Healthcare Innovation Expo this month featured an app from Skin Analytics that offers to track changes in skin moles to “raise early warning signs” by comparison with an online database.3 Its website says that, for £30 a year for an individual or £50 for a family, the app can “baseline you and your family” using “patent pending technology” that can “detect small changes in both the geometrical structure and colour composition of your moles with an exceptional 95% accuracy.”4
A recent study in JAMA Dermatology showed that most previously marketed apps had a failure rate in melanoma diagnosis of about 30%.5 Julian Hall, director of Skin Analytics, said that this app, which is not yet available to buy, was not a diagnostic service but was instead “trying to implement the self examination advice from public health bodies and answer the question, ‘Has the lesion changed or not changed?’—prompting people to see their GP or dermatologist.” Clinical trial data on the app are lacking, but Hall says a trial is planned for later this year. Yet the question of evidence is crucial. Do apps offer to gather more, or misleading, data for little useful signal? Several apps offer to check pulse rate using a phone’s camera light. One app claims 25 million users after promotion in the United States,6 with the ability to record serial pulse rates, but
it is not clear what advantage this offers over manual pulse measurement, should this be desired. It is also possible to buy a small plug-in device that turns your phone into a pulse oximeter, although this is described as “not for medical use” and is marketed as useful for mountain climbers or private pilots and retails at about $250 (£165, €190).7 Some free apps offer “health checks” that are really just adverts for cosmetic surgery. Specsavers, which the BMJ recently reported had been advertising for contracted NHS services,8 offers a free app described as a “sight check.” Users cover an eye, and test their visual acuity with images on the phone. (Despite having had a recent prescription, I was still “strongly recommended” to speak to my optometrist.)
The interactivity that apps provide based on information entered makes them distinct from books or leaflets, and the handheld nature and additional recording offered is different from the reach of websites. This can widen the potential for unintended outcomes. The NHS Commissioning Board last week launched a “library of NHS-reviewed phone apps to keep people healthy” because they are “committed to improving outcomes for patients through the use of technology.” More than 70 have been approved in a review that includes a “clinical assurance team,” to ensure that they “comply with trusted sources of information, such as NHS Choices,” with assessment of the potential to “cause harm to a person’s health or condition.”9 However, a high standard of evidence should surely be crucial in a product approved by the NHS. For example, the charity Beat Ovarian Cancer offers a “symptom tracker,” which “helps women recognise the signs and symptoms of ovarian cancer,” but, without real world trials to show effects and quantify harms, we do not know whether this is beneficial. The NHS Commissioning Board said that, through its review process, it is “ensuring that the apps listed in the Library are clinically safe and suitable for people who are living in the UK,” and that apps “have been checked by the NHS and adhere to NHS safety standards.” Yet these apps could be tested in a real life situation for evidence of benefit and free of unintended harms. Why not? Another NHS recommended app is iBreastcheck, which can be set to remind women to check their breasts weekly, fortnightly,
[email protected] For personal use only: See rights and reprints http://www.bmj.com/permissions
Subscribe: http://www.bmj.com/subscribe
McCartney, BMJ 2013; 346:f1811
The evidence (1)
Not all studies are created equal…
Levels of evidence:
single case report
case series (observational)
randomized controlled trial (RCT)
meta-analysis of controlled trials
The problems (still) exist…
For example:
unpublished studies
fishing expeditions
The evidence (2)
Problems of generalizability/applicability
• effects of inclusion/exclusion criteria
• differences in outcome definition
• populations may differ in their characteristics:
• social
• environmental
• genetic
and beyond the evidence…
for apps, and in general for everything concerning mHealth and eHealth, beyond clinical evidence the following are also needed:
• interoperability (data exchange between different systems)
• open standards (so that everyone can have access)
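As a concrete illustration of what interoperability means in practice, the sketch below encodes a single heart-rate reading in an HL7 FHIR-style Observation resource. This is a deliberately minimal, hypothetical example (the structure is simplified, not a complete FHIR document): because the payload is plain JSON built on openly documented codes, an independently built system can parse it.

```python
import json

def heart_rate_observation(bpm: float) -> dict:
    """Encode one heart-rate reading as a minimal FHIR-style
    Observation resource (simplified for illustration)."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {
            "coding": [{
                "system": "http://loinc.org",  # open terminology for clinical codes
                "code": "8867-4",              # LOINC code for heart rate
                "display": "Heart rate",
            }]
        },
        "valueQuantity": {"value": bpm, "unit": "beats/minute"},
    }

# A second, independently built system can read the payload back,
# because the format and the codes are openly specified:
payload = json.dumps(heart_rate_observation(72))
received = json.loads(payload)
print(received["code"]["coding"][0]["display"], received["valueQuantity"]["value"])
```

The point is not the code itself but the design choice: a closed, proprietary format would force every receiving system to license or reverse-engineer the schema, which is exactly the barrier the editorial below describes.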
Editorial
A Reality Checkpoint for Mobile Health: Three Challenges to Overcome The PLOS Medicine Editors* The use of mobile electronic devices to support medical or public health practice, or m-health, is currently a hot topic. It has been predicted that by 2017 there will be ‘‘more mobile phones than people’’ on the planet [1], and currently three-quarters of the world’s population have access to a mobile phone [2]. The World Health Organization (WHO) has announced [3] that m-health has the ‘‘potential to transform the face of health service delivery across the globe,’’ and there is increasing media and consumer interest in m-health, illustrated for example by a recent panel at the US Consumer Electronics Show on ‘‘The Digital Health Revolution’’ [4]. Survey data illustrates that most regions of the world, including many low- and middle-income countries, are actively working on m-health pilot projects or have set up systems, for example, for managing treatment compliance, sending appointment reminders, or conducting surveys [3]. However, amidst the interest (and, possibly, a bit of hype) it is worth considering whether m-health needs a reality check. Recently in PLOS Medicine, an Essay by Mark Tomlinson and colleagues [5] highlighted the proliferation of m-health pilots in many countries. However, in the Essay, Tomlinson and colleagues comment that few pilots move forward to scale-up, and there is little evidence to inform whether, when, and how, pilots might expand countrywide. Tomlinson and colleagues also raise concerns regarding the increasing interest in m-health from industry, which is likely to have very different motivations than would patients or those responsible for safeguarding public health. 
At a WHO forum on data standards for e-health (defined as the use of electronic processes and communication to support health care), held in December 2012 [6] (where one of us, EV, participated), countries reported that a panoply of proliferating standards exists, many of which are closed standards, with high barriers to access; barriers include not just cost, but also the technical complexity of systems and standards and language differences. Consequently, in the rush to develop new applications, many countries
end up with a fragmented patchwork of systems, which do not talk to each other. Although these concerns were raised in the context of e-health, the same issues also apply to m-health. With this in mind, we set out three key challenges that advocates will need to overcome to fulfill the promise of m-health.
Reality Check 1: Are Your Systems Interoperable? Interoperability refers to those properties of systems (whether software, communications, or other systems), that enable the exchange of data among systems in common formats, the use of common protocols, and ultimately the ability to work together. Interoperability is a critical issue for m-health (and for e-health more generally), because patients may have multiple clinical needs and conditions at one time, and will interact with the health systems via multiple points, providers, and professionals [7]. Although many m-health applications may appear simple (e.g., a system sending texts to patients reminding them of their next appointment), such systems will have greatest potential for wider use if they can easily and accurately exchange information with other systems—for example, microscopy images taken using a mobile phone can be securely imported back into the electronic health record. A vision for interoperability [8] has set out what should live inside the interoperable ‘‘core’’ of m-health systems: standards that govern health data concepts, patient
identity, data processing protocols, and mechanisms for secure sharing of patient data that preserve confidentiality. However, while common standards for all these uses do not yet exist, there is light at the end of the tunnel. As Estrin and Sim have emphasized [8], there are critical differences between the e-health and m-health fields. M-health has fewer entrenched systems than does e-health, and therefore potentially fewer legacy barriers to overcome in order to establish a new shared architecture.
Reality Check 2: Are You Using Open Standards? Open standards and interoperability go hand in hand, although these terms refer to different properties of a system [8]. There is no single definition of ‘‘open standard,’’ but generally this term is taken to imply that the standard is publicly available, information about its use and application is available, there are no fees for use of the standard, and the standard was developed using a consensus process [9]. Multiple ‘‘standards’’ exist in e- and m-health—for example, Health Level Seven (HL7) refers to a set of rules that govern how health-care systems exchange information with each other. SNOMED CT is a coded taxonomy which is used to define healthcare information concepts (e.g., to define diseases, findings, procedures, and so on. However, although both examples have been developed by nonprofit organizations, neither is yet freely available for use (although HL7 hopes to
PLoS Medicine Editors, PLoS Medicine 2013; 10:e1001395
Apps (CCT) and dementia prevention (1)
Meta-analysis of RCTs (with control groups) of Computerized Cognitive Training
Population: healthy older adults
Short-term efficacy
51 studies (4,885 participants)
Lampit, PLOS Medicine 2014; e1001756
Apps (CCT) and dementia prevention (2)
Meta-Analysis of Computerized Cognitive Training in Older Adults
overall effect: significant, ≈ +1 MMSE point
individual domains: almost all significant, with modest effect sizes
subgroups: effective if trained 1–3 times/week, not if more than 3 times/week
effective for multidomain, attention, speed or video-game training; not for training specific to working memory
effective if trained in a group, not if only at home
a specific effect? a social effect?
Figure 2. Overall efficacy of CCT on all cognitive outcomes. Effect estimates are based on a random-effects model, and studies are rank-ordered by year of publication.
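The random-effects pooling behind a forest plot like this can be sketched in a few lines. The following is a minimal DerSimonian–Laird implementation with invented effect sizes and variances, not the Lampit data:

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study
    effect sizes (e.g., standardized mean differences)."""
    k = len(effects)
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                    # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Illustrative (invented) per-study effects and variances:
effect, ci = random_effects_pool([0.3, 0.1, 0.4, 0.2], [0.02, 0.03, 0.05, 0.01])
print(f"pooled effect {effect:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```

When between-study heterogeneity (tau²) is zero, the estimate collapses to the fixed-effect result; when heterogeneity is large, the weights flatten and small studies count relatively more.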
Lampit, PLOS Medicine 2014; e1001756
Apps and dementia prevention (3)
• Lumos Labs: (paid) digital games for "brain training based on the science of neuroplasticity"
• evaluated in an "RCT" with almost 5,000 subjects
• result: better than classic crossword puzzles (again, only short-term)
Enhancing Cognitive Abilities with Comprehensive Training
Fig 1. CONSORT flow chart of participants in the study.
participant, upon logging in each day, to either cognitive training or crossword puzzles based on his or her group assignment. However, in some cases participants in the crossword control group were able to access cognitive training. As a result, 330 control participants were removed from the primary analysis because they accessed the cognitive training program during the study period (Fig 1). See Table 1 for demographic characteristics of the fully evaluable cohorts in both conditions. Age, gender, and educational attainment were evenly distributed across the groups.
Treatment and control groups All participants were instructed to log into the website and do one session per day of their activity (cognitive training for the treatment group or crossword puzzles for the control group), 5 days a week for 10 weeks. Daily email participation reminders were sent to all participants during the study period. Cognitive training treatment. The Lumosity cognitive training program was used as the treatment condition in this study. Treatment participants in this study received the same training experience that Lumosity subscribers received over the same period of time. Daily training sessions included five cognitive training tasks. On any given day, the five tasks for that particular session were chosen by an algorithm that attempted to optimize a balance of training activities such that tasks were presented in clusters across days without repeating individual tasks on a given day. One five-task session typically took approximately 15 minutes to complete. Outside of this session, participants could opt to do additional training with any of the 49 available
Hardy, PLoS One 2015; 10:e0134467
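The day-by-day task selection described above can be sketched as a toy scheduler. This is a hypothetical reconstruction (the actual Lumosity algorithm is not published in the excerpt): pick five distinct tasks per session, preferring those used least recently, so that tasks cluster across days without repeating within a single day.

```python
def pick_session(tasks, last_used, day, n=5):
    """Choose n distinct tasks for today's session, preferring
    those trained least recently (toy balancing heuristic)."""
    ranked = sorted(tasks, key=lambda t: last_used.get(t, -1))
    session = ranked[:n]
    for t in session:
        last_used[t] = day  # record when each task was last trained
    return session

tasks = [f"task{i}" for i in range(49)]   # 49 available tasks, as in the study
last_used = {}
week = [pick_session(tasks, last_used, day) for day in range(5)]

# No task repeats within a day, and no task repeats across days
# until the whole pool has been cycled through:
assert all(len(set(s)) == 5 for s in week)
```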
Regulators seek to tame brain training’s ‘Wild West’ $2 million fine for Lumosity is latest shot at claims of cognitive benefits from games and apps By Emily Underwood
If you watch cable TV news or listen to NPR, you’ve likely been barraged with ads for Lumosity, a set of digital “brain-training” games that, for $14.95 a month, purportedly sharpens the mind based on the “science of neuroplasticity.” By playing the games for just 10 to 15 minutes per day several days a week, the ads claim, consumers can improve their performance in work and school, and perhaps even stave off Alzheimer’s and other serious medical conditions. Last week Lumosity hit the news for a different reason, as the Federal Trade Commission (FTC) made it the latest target in a crackdown on companies selling products that purportedly enhance memory, provide some other cognitive benefit, or reduce the serious side effects of dementia. It fined the
games’ maker, Lumos Labs, Inc., $2 million for false advertising and required it to create a pop-up screen that alerts players to FTC’s order and allows them to avoid future billing. It’s the third FTC complaint against the industry in 4 months, and many neuroscientists and psychologists say action is long overdue. Still, some worry that games based on solid science may be unfairly tarnished, and that the agency may be imposing a standard of evidence that game developers can’t meet. Both FTC and the Food and Drug Administration (FDA) have authority to regulate brain-training games and apps, but FTC is particularly interested in deceptive advertising practices, says Michelle Rusk, a spokesperson with FTC in Washington, D.C. For some time now, FTC has been “concerned about some of the claims we’re seeing out
there,” she says. After evaluating the few available studies on Lumos Labs’s products as well as the broader literature on brain-training games, “our assessment was they didn’t have adequate science for the claims that they’re making,” she says. “The most that they have shown is that with enough practice you get better on these games, or on similar cognitive tasks … there’s no evidence that training transfers to any real-world setting.” Susanne Jaeggi, a psychologist at the University of California (UC), Irvine, agrees, saying, “there’s little to no evidence” Lumosity works the way the company says it does. Lumos Labs’s claims of benefitting seriously ill people, and its use of testimonials solicited from customers via prize contests, are “egregious” examples of irresponsible advertising, adds Michael Merzenich, a neuroscientist at UC San Francisco and co-founder of Posit Science, which develops a different type of brain-training software. Although Lumos Labs is not granting interviews, a company statement says its games are based on rigorous science. (One of its owners, Michael Scanlon, left a Stanford University Ph.D. program in neuroscience in 2005 to start the company.) The statement cites the company’s recent study in the journal PLOS ONE, showing that 4700 participants who trained with Lumosity for 10 weeks showed small improvements on an aggregate assessment of cognition. Although Jaeggi is skeptical of Lumos Labs’s claims, she says “we have to be careful not to overgeneralize.” There is “growing evidence” that brain-training games related to skills such as working memory—the short-term ability to learn and use new information—“can be beneficial for a variety of tasks,” she says. Some 70 prominent researchers came to the defense of Carrot Neurotechnology in 2015 after FTC fined Aaron Seitz, a psychologist at UC Riverside, and his partner $75,000 each for ads promoting the company.
Now called Ultimeyes, the company sells a training program aimed at improving visual acuity by increasing the efficiency with which the brain processes information from the eyes. The agency took aim at assertions that the app improved acuity by an average of 31%—a figure plucked from Seitz’s small, peer-reviewed study in Current Biology in February 2014, which tracked a baseball team’s performance after training on the app. Such claims must be supported with statistically significant results from randomized, blinded, placebo-controlled human clinical trials, the agency said. Although some of Ultimeyes’s claims were too strong, its defenders concede, they say the app’s basic scientific premise is based on decades of research in vision science and has been vetted by top experts in the field.
Underwood, Science 2016; 351:212
• turnover: $1 billion
• marketed as "brain training against Alzheimer's"
• the Federal Trade Commission imposed a $2 million fine (misleading advertising)
Apps and dementia prevention (4)
Apps and dementia prevention (5)
www.lumosity.com (8 March 2016)
Cognitive training and cognitive decline (1)
Computerized Cognitive Training / Virtual Reality Cognitive Training (systematic, standardized training on mental tasks, aimed at optimizing cognitive function)
Population: MCI/AD
Mixed interventions, of variable duration (4 to 420 h)
Mixed controls (from nothing to passive computer use)
>> Preliminary studies (40 subjects per study on average): 16 studies, 664 subjects in total
Coyle, Am J Geriatr Psychiatry 2015; 23:335
Cognitive training and cognitive decline (2)
meta-analysis not feasible because of large differences in study designs and outcome measures
positive (though not always significant) effect on specific measures (memory, attention, executive function)
global cognitive function (MMSE or similar) improves significantly, though not by much (perhaps it is merely maintained)
depression/anxiety: improvement (perhaps)
ADL: no improvement
Coyle, Am J Geriatr Psychiatry 2015; 23:335
Cognitive training and cognitive decline (3)
Coyle, Am J Geriatr Psychiatry 2015; 23:335
"Computers" and caregivers
Computer-delivered psychosocial interventions for informal caregivers of people with dementia
"Computers" and caregivers (1)
aims: reducing stress and increasing the ability to handle problem situations, and above all not to be crushed by them
means (inclusion criteria of the review): group chats/videoconferences, education via DVD/PC or websites, meta-information on ICT, possibly combined with remote professional therapy
type of intervention: responses to specific problems
interventions of variable duration (1 to 12 months)
RCTs, but not only
McKechnie, Int Psychogeriatr 2014; 26:1619
"Computers" and caregivers (2)
Results: 14 studies included (1,165 subjects)
Depression: improvement, though not in all studies, with some evidence of a dose-response relationship. Perhaps anxiety too (less studied)
QoL/health: no change
Stress: improvement, perhaps with a dose-response relationship
McKechnie, Int Psychogeriatr 2014; 26:1619
"Computers" and caregivers (3)
Problems:
no power calculations
outcomes not always clear, or not matched to what the intervention is supposed to do
results (for everything that was assessed) not always reported in the article
McKechnie, Int Psychogeriatr 2014; 26:1619
Internet and caregivers
Tools available exclusively via the internet
Systematic review of articles (overlapping with the previous review)
Similar results
"Lack of quality methodology"
"New studies (RCTs) conducted under specific protocols are needed to provide more precise answers"
Boots, Int J Geriatr Psychiatry 2014; 29:331
Wearable devices
General issues
Production and transmission of large amounts of data (big data):
1. new networks to support the new traffic generated
2. data security:
data theft from servers or in transit
data manipulation
manipulation of sensors or actuators
interaction with everyday life (sunlight, extreme temperatures, altitude, other sensors, …)
Austen, Nature 2015; 525:22
Mombers, Br J Clin Pharmacol 2015; 81:196
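One standard defence against the data-manipulation risk listed above is to authenticate each sensor reading with a keyed hash, so that readings altered in transit are detected on arrival. A minimal sketch (the device ID and shared key are invented for illustration):

```python
import hashlib
import hmac
import json

SECRET = b"device-provisioning-key"  # hypothetical key shared at device provisioning

def sign_reading(reading: dict) -> dict:
    """Attach an HMAC-SHA256 tag so the server can detect
    readings altered in transit."""
    body = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"reading": reading, "tag": tag}

def verify_reading(message: dict) -> bool:
    """Recompute the tag server-side and compare in constant time."""
    body = json.dumps(message["reading"], sort_keys=True).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign_reading({"sensor": "hr-01", "bpm": 72})
assert verify_reading(msg)                 # untouched message verifies
msg["reading"]["bpm"] = 40                 # tampering in transit...
assert not verify_reading(msg)             # ...is detected
```

Note that this only covers integrity; confidentiality (against theft from servers or in transit) additionally requires encryption, and a compromised sensor can still sign false data.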
Sensors
At present, most wearables go to "fitness enthusiasts"
What use in medicine?
RCTs: pedometers for low physical activity (but with short follow-up)
telemonitoring of diabetes (but the patient's psychological characteristics matter)
Efficacy data are lacking!
Personal data are often owned by the wearable's manufacturer!
Fig 1. What can consumer wearables do? Heart rate can be measured with an oximeter built into a ring [3], muscle activity with an electromyographic sensor embedded into clothing [4], stress with an electrodermal sensor incorporated into a wristband [5], and physical activity or sleep patterns via an accelerometer in a watch [6,7]. In addition, a female’s most fertile period can be identified with detailed body temperature tracking [8], while levels of attention can be monitored with a small number of non-gelled electroencephalogram (EEG) electrodes [9]. Levels of social interaction (also known as general well-being) can be monitored using proximity detections to others with Bluetooth- or Wi-Fi-enabled devices [10]. Consumer wearables can provide personalised, immediate, and goal-oriented feedback based on specific tracking data obtained via sensors and provide long-lasting functionality without requiring continual recharging. Their small form factor makes them easier to wear continuously. While smartphones are still required to process the data for most consumer wearables, it is conceivable that in the near future all processing functionality will be self contained.
manufacturers utilise a range of digital persuasive techniques and social influence strategies to increase user engagement, including the gamification of activity with competitions and challenges, publication of visible feedback on performance utilising social influence principles, and reinforcements in the form of virtual rewards for achievements. There is also a small, but growing, population of wearable users specifically interested in the concept of self-discovery and personal analytics—the Quantified Self (QS) movement [11]. A number of scientific and popular publications describe methods and techniques for using consumer wearables as “self-tracking” devices—to improve sleep, manage stress, or increase productivity [12]. But do these interventions make people healthier? Current empirical evidence is not supportive. Evidence for the effectiveness of QS comes from single-subject reports of users describing their experiences. Subjective reports like these cannot be treated as reliable scientific evidence. Very few longitudinal, randomised controlled studies focus on the impact of wearable technology on healthy users’ behaviour.
Piwek, PLoS Med 2016; 13:e1001953
Wandering and geofencing: an experiment
Wandering: lifetime prevalence in dementia around 20%; relatively benign, but potentially dangerous; there are no predictors of elopement/getting lost; the usual response: lock everything up, continuous surveillance
Caregiver stress (Relative Stress Scale, RSS) — Greene JG, Smith R, Gardiner M, Timbury GC. Age and Ageing 1982; 11:121-126
Response anchors (each item scored 0-4):
never (0) | rarely (1) | sometimes (2) | often (3) | always (4)
Items marked #: not at all (0) | a little (1) | moderately (2) | a lot (3) | very much (4)
1. Do you ever feel that you can no longer cope with the situation?
2. Do you ever feel that you need a break?
3. Do you ever feel depressed by the situation?
4. Has your own health suffered in any way?
# 5. Are you worried that accidents may happen to ................................... (patient's name)?
6. Do you ever feel that there will be no way out of the problem?
7. Do you find it difficult to get away on holiday?
# 8. Has your social life been affected?
# 9. Have the activities you usually carried out at home been disrupted?
# 10. Is your sleep interrupted by ..................................................?
11. Has your standard of living been reduced?
# 12. Do you ever feel embarrassed because of ...................................................?
Caregiver-induced stress
13. Do you ever find it completely impossible to have visitors?
14. Do you ever get cross or angry with ...................................................?
15. Do you ever feel helpless or useless with regard to ..................................?
Comment:
Reported correlations (Relative Stress Scale): problem behaviours correlate with caregiver stress; agitation/getting lost correlates with caregiver stress
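Given the structure above (15 items, each scored 0-4, for a total of 0-60), computing a total score is a simple sum. A minimal sketch; the function name and the validation logic are illustrative, not part of the published scale:

```python
def rss_total(item_scores):
    """Sum the 15 Relative Stress Scale items (each rated 0-4) into a 0-60 total."""
    if len(item_scores) != 15:
        raise ValueError("the RSS has exactly 15 items")
    if any(not 0 <= s <= 4 for s in item_scores):
        raise ValueError("each item is scored from 0 to 4")
    return sum(item_scores)
```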
Does geofencing reduce stress? Limited studies, essentially "theoretical" or restricted to acceptability; a true RCT is still lacking; technological problems; the ethical dilemma of surveillance
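For context, geofencing in this setting means raising an alert when a tracked device leaves a predefined safe zone around the home. A minimal sketch using the haversine great-circle distance; the coordinates, the 200 m radius, and the function names are illustrative assumptions, not taken from any cited system:

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points in degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def outside_geofence(position, home, radius_m=200):
    """True if the tracked position lies beyond the safe radius around home."""
    return haversine_m(*position, *home) > radius_m
```

In a real deployment the hard part is not this check but everything around it: GPS drift indoors, battery life, and who is notified when the alert fires.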
Planning the experiment (1)
[Histograms: RSS at baseline — mean 17.0, SD 10.2 (scale shown 0-55); changes in RSS — mean 0.2, SD 4.6 and mean 0.6, SD 6.2 (scale shown -40 to +40).]
Planning the experiment (2)
Randomised controlled clinical trial (crossover design)
Blinded outcome assessment (blinding of patients/caregivers is not possible)
Primary outcome: difference in RSS and Zung-SAS (Hochberg correction for multiplicity)
Sample size calculation: 75 patient/caregiver dyads
Intention-to-treat analysis
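The sample-size figure on this slide can be approximated with the standard normal-approximation formula for a paired (crossover) comparison. The sketch below is illustrative: the assumed effect (a 2-point RSS difference with SD of within-pair differences of 5.5) and the alpha of 0.025 (the conservative Bonferroni bound implied by Hochberg's procedure over two co-primary outcomes) are assumptions for illustration, not the study's actual inputs:

```python
from math import ceil
from statistics import NormalDist

def paired_n(delta, sd_diff, alpha=0.025, power=0.80):
    """Normal-approximation sample size for a paired (crossover) comparison.

    delta:   smallest mean within-pair difference worth detecting
    sd_diff: standard deviation of the within-pair differences
    alpha:   two-sided significance level
    power:   desired statistical power
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = z.inv_cdf(power)
    effect = delta / sd_diff            # standardised effect size
    return ceil(((z_alpha + z_beta) / effect) ** 2)
```

With these illustrative inputs the formula lands in the same range as the 75 dyads quoted on the slide; a t-distribution correction or an allowance for dropouts would push the number somewhat higher.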
Big data: so much data that new paradigms are required to read them
Big data: definition
Across different individuals: e.g. administrative databases (population-wide, though "lean" and with reliability to be assessed case by case)
On the same individual: data produced by sensors
*Blumenthal, JAMA 2015; 313:1424
Big data: new uses
1. extremely fine-grained analysis of characteristics, with the possibility of discovering new relationships
2. personalised characterisation of the patient, and studies yielding precise results with fewer subjects
"Non-transparent" new uses
Risks: data theft (2010-2013, USA: millions of records*)
*Blumenthal, JAMA 2015; 313:1424
The future is already here
ingestible sensors
diseases on social media, and digital informed consent
"epidermal electronics"
subcutaneous biosensors + connected drug pumps
electric power harvested from body movement
Gibney, Nature 2015; 528:26; Mombers, Br J Clin Pharmacol 2015; 81:196
Is it time to use these tools? Technology makes new tools available (both for assessment and for support) at an extremely fast pace, and at least some of them are potentially very useful. An objective evaluation of the strengths and weaknesses of these new tools, preferably through RCTs, is essential before they can be used on a large scale. It is important to have common benchmarks against which new technological tools can be compared.