Most Electronic Patient Record (EPR) systems currently run only on stationary computers, while empirical studies of clinical work in hospitals show that health workers are constantly on the move in a highly event-driven working environment. Clinical work is information and communication intensive and highly mobile. EPR content is currently to a large extent produced and utilized in point-of-care settings away from the computers through the use of paper printouts, handwritten notes, and voice memos, while actual interaction with the EPR is done while sitting down at a stationary computer.
This creates an obvious potential for mobile computing in healthcare. To best support health workers in their everyday work, the hospital’s EPR system should allow for interaction with the patient’s medical information at the point of care. A number of studies of existing systems have documented the benefits of mobile computing in health care [3,4], and other studies indicate additional benefits from the use of context information such as the health worker’s location and electronic patient identification [5–7].
Moving the user interfaces of EPR systems on to mobile devices creates new challenges for system design and usability evaluation. Since its infancy at Xerox PARC in the late 1970s, usability testing of information systems has matured into an established practice in the software industry, with an ISO-defined common industry format for reporting test results. Up until recently, most software products being tested were desktop based, i.e. single-user software running on a desktop computer with input through a keyboard and a mouse. This situation is now changing as more software is being produced for mobile devices such as mobile phones and PDAs. This creates new methodological and technological challenges. From a usability perspective, the main difference between desktop-based and mobile computing is related to the use situation. The prototypical use situation for desktop-based applications is one user sitting on a chair in front of a table looking at a screen with his or her hands on the keyboard and the mouse. Mobile technology, on the other hand, is to a much larger degree embedded into the user's web of physical and social life. Dourish uses the concept of embodied interaction when referring to this. Embodied interaction, as argued by Dourish, is characterized by presence and participation in the world. As such, interaction with mobile technology is not a foreground activity to the same extent as interaction with desktop-based systems, but switches between being at the foreground of the user's attention and residing silently in the background.
The hospital as a work environment makes usability evaluations even harder, compared to for example everyday use of mobile phones. Mobile ICT in healthcare is often integrated with a number of other ICT systems, serves a number of different user groups, and must allow for use in a number of different physical environments. Usability testing of mobile technology in healthcare consequently requires new ways of designing and doing the tests, new ways of recording user and system behavior, and new ways of analyzing the test data. In the present paper we will address some of the methodological and practical challenges related to usability testing of mobile ICT for healthcare. This will be done by summing up our experience from two usability evaluation projects of mobile EPR done in a full-scale model of a hospital ward. We have posed two research questions. (1) What classes of usability problems should a usability test of mobile ICT for clinical settings be able to identify? (2) What are the consequences concerning test methodology, lab setup and recording equipment? We will answer the first question by analyzing the usability issues that emerged in the two projects. The second question will be answered by analyzing which aspects of our existing test methodology, lab setup and recording equipment contributed to the identification of these usability issues. Based on this, we will give some general recommendations for usability testing of mobile ICT for clinical settings. We are aware of the limitations given by the low number of projects, and will discuss the threats to validity that this poses.
2.1. Mobile technology defined
There is at present no consensus on a definition of mobile technology. Weilenmann reviews the literature on mobile usability and ends with a fairly open definition of mobile technology: ". . .a technology which is designed to be mobile" (p. 24). For the purpose of the present analysis we prefer a more precise definition. We define mobile technology as technology that provides digital information and communication services to users on the move, either through devices that are portable per se, or through fixed devices that are readily at hand at the users' current physical position. Concerning computer devices, the above definition includes Tablet PCs, PDAs and mobile phones, but also opens up for ubiquitous and pervasive technologies, multi-user, and multi-device systems. It excludes the desktop computer, defined as a one-user-at-a-time stationary computer with display, keyboard and mouse.
2.2. Usability defined
Up until the late 1990s there was no well-established definition of usability. A long discussion in the field has led to an ISO definition of usability. ISO 9241-11 defines usability as the "extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use". An important property of usability as defined by ISO is that it is relative to the users, their goals and the physical and social context of use. This makes the definition of usability context-dependent and different from context-free definitions such as that of the meter, which is the same for every user, every goal and every physical and social environment. By defining usability relative to users, goals, and environment, it becomes meaningless to talk about usability as a property of a product as such. A modern "smartphone" can have a high usability for an adult user who wants to use it for a multitude of tasks. Due to the necessary complexity of the user interface, the same mobile phone might have a very low usability for her child who simply wants to call her mother.
2.3. Usability evaluation of mobile technology
The physical shape of the PC has converged into two dominant forms, the desktop computer and the laptop. This de facto standardization makes it possible to develop software for PCs without having to care about hardware issues. For mobile devices the situation is far more complex. We find a multitude of form factors, screen sizes, interaction technologies, and button configurations. Mobile devices range from one-button controllers for garage doors to "smartphones" with full QWERTY keyboards. They take input through different combinations of buttons, touch screens, navigation wheels, voice recognition, and pen input. Some devices have no screens, while others have very small screens. One implication is that every evaluation of a mobile application or service will at the same time be an evaluation of the device(s) on which it runs. Since Weiser coined the term "ubiquitous computing" in the early 1990s, there have been a number of usability evaluations of non-desktop systems, both under controlled laboratory conditions and through field trials.
A number of studies have compared stationary usability testing and field testing for mobile technology (e.g. [17,18]). The usability tests took place in "traditional" usability laboratories, and consisted of testing the mobile application in a stationary use setting. The field trials involved following the users in their natural setting. The studies concluded that both evaluation methods have their specific pros and cons, and that they complement each other. Usability tests are better at identifying details of the interaction, but lack realism. Field trials are better at identifying contextual matters, but make it difficult to get feedback on specific user interface issues.
3.1. A usability laboratory for mobile ICT in medical settings
As part of a national research initiative on health informatics in Norway (NSEP), we got funding to build a usability laboratory for evaluation of mobile applications in the health domain. Being aware of the drawbacks of traditional desktop-based usability tests for mobile technology, we started out by conducting a comparative usability evaluation to verify the results of Kjeldskov et al. The study verified their results and motivated the construction of a laboratory that allows for a large degree of realism. The health domain differs from many other domains in that field trials are very difficult, for medical, ethical and practical reasons. This gave an additional motivation for building a usability laboratory, rather than relying on field tests.
To compensate for the lack of realism in traditional usability tests, we have built a laboratory with movable walls in a 10 m × 8 m room that allows for full-scale simulations of different hospital settings. Our hope is that this approach will give us the best of desktop usability tests and field trials. The laboratory has been used for testing of mobile and ubiquitous computing, and for doing drama-based participatory design. In Fig. 1 we see a typical setup of the laboratory where the movable walls and doors are configured to mimic a section of a ward in an average Norwegian hospital. The rooms are equipped with patient beds, chairs and tables to create a high level of realism. We have consulted health workers in this process.
For recording of user data we use a fully digital Noldus video-recording solution with our own adjustments and extensions. We currently have three roof-mounted remote-control cameras, a number of stationary cameras, wireless "spy" cameras, wireless microphones, an audio mixer, and software solutions for doing remote "mirroring" of the content on the mobile devices. The recording equipment allows us to integrate a number of video and screen-capture streams into a high-definition digital video recording. At most, we have integrated in real time three video streams and live screen capture from seven mobile devices, together with audio from four microphones.
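The integration of camera streams and device "mirrors" into one recording can be illustrated with a simplified frame-compositing step: camera frames stacked in a left column and device screen captures in a right column, roughly the layout seen in Fig. 4. The sketch below is our own illustration in Python/NumPy, not the actual Noldus-based pipeline; the function name and frame sizes are assumptions.

```python
import numpy as np

def composite_recording(camera_frames, device_captures):
    """Combine one video frame from each source into a single image.

    camera_frames and device_captures are lists of HxWx3 uint8 arrays.
    Cameras are stacked into a left column, screen captures into a
    right column; the shorter column is padded with black at the bottom.
    """
    left = np.vstack(camera_frames)
    right = np.vstack(device_captures)
    height = max(left.shape[0], right.shape[0])

    def pad_to(img, h):
        # Pad with black rows so both columns have the same height.
        out = np.zeros((h, img.shape[1], 3), dtype=img.dtype)
        out[: img.shape[0]] = img
        return out

    return np.hstack([pad_to(left, height), pad_to(right, height)])
```

In a real pipeline this step would run per frame on synchronized streams; synchronization and audio mixing are omitted here.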
4. The two experiments
We will report here on two usability evaluations done in the usability laboratory by the authors. Both evaluations were controlled experiments exploring the potential for mobile and ubiquitous computing in the hospital. The aim of the two studies was to compare specific technological solutions. The results from the comparison tests have been reported elsewhere, while the consequences for test methodology were not discussed. We will here summarize the lessons learned from the two experiments concerning usability evaluation methodology.
4.1. Experiment 1: combining handheld devices and patient terminals
A number of new hospitals now install bedside terminals for the patients. Such terminals are currently to a large extent used for entertainment and web browsing. The patient terminal is basically a PC where all input and output is done through a touch screen. The patient terminal is mounted on a movable arm (see Fig. 2), so that it can be moved according to the patient's or staff's preferences. In cooperation with one of the vendors of these terminals, we explored the potential for letting physicians use handheld devices (PDAs) as input devices for the bedside terminals. Seven alternative designs were developed, in addition to a baseline solution where all interaction was done directly on the patient terminal touch screen. The eight alternative designs were tested on a scenario where a physician uses a bedside terminal to show X-ray images to a patient.
Fig. 3 shows two of the prototypes. In the solution to the left, the physician selects an X-ray image by dragging it to a terminal icon on the PDA. In the solution to the right, the physician uses the PDA as a remote control to navigate in a menu on the bedside terminal.
Due to patient safety and privacy issues, we were not allowed to test the prototypes in situ. The usability tests were done in our usability laboratory with a replication of a patient room with a hospital bed, a touch-screen bedside terminal, and a PDA. Due to the nature of the scenario, the tests were done with pairs of users, one physician and one patient. A total of five pairs were recruited.
Fig. 4 shows the recorded video from a usability test of a third design alternative. The integrated video has two video streams to the left and a mirror image of the PDA to the right. After having tried out all versions, the physicians and patients were asked to rank the different solutions by sorting cards representing the alternatives. They were asked to give reasons for their ranking.
The ranking session for each alternative was recorded, and the post-test interviews were transcribed. The interviews were then analyzed in search of recurring patterns. The comments made in the tests and during the card rankings gave insight into the factors that were perceived as influencing the usability. All factors listed below were found for all pairs of testers.
4.1.1. The graphical user interface
The usability of the graphical user interfaces (GUI) on the two devices had an important impact on the overall usability. When the users were unable to comprehend the user interfaces, or when they were awkward to use, the corresponding design alternatives got a low ranking. The usability of the graphical user interface is here defined as what is normally evaluated with a stationary usability test
on a desktop computer. It includes the visual design, the ease of use of the interactive screen elements, and factors such as affordance, constraints, visibility, feedback, and interface metaphors. The simplicity of the GUI was explicitly appreciated by many of the users.
4.1.2. Screen size and ergonomics of the patient terminal
All participants reported that the screen of the patient terminal was large enough to show X-ray images, while the screen of the PDA was too small for this purpose. Having the patient terminal positioned by the bed within arm's reach of the patient made the X-ray images easy to see for both physician and patient. The terminal was easy for the patients to operate through touch, while some physicians were uncomfortable with the solution, as they had to bend over the patient's bed to reach it. Some physicians commented that a good thing about the PDA-based design alternatives versus the baseline alternative (no PDA) was that they no longer had to bend over the patient's bed to operate the terminal. This influenced their ranking of the alternatives in favor of the PDA-based designs.
4.1.3. Shared view versus hiding information on the PDA
One recurring issue during the interviews was whether the selection list should be on the patient terminal or only on the PDA. Four of the design alternatives had the list of X-ray images present on the patient terminal all the time, while the remaining four had the list only on the PDA. Most physicians thought at first that there was no point in hiding the list from the patient, while some felt that the list could distract the patient. Some were afraid that the patients would interpret information on the list without having the skills to do so. Most of the patients initially wanted the list to be present on the screen. They wanted to see an overview of the images and felt that the physician was keeping secrets from them when the list was not present. Two of the patients changed their minds during the tests, and felt that the list took too much attention. They felt that it was easier to focus on the X-ray images and the physician when the list was not present. One patient felt that he had enough confidence in the physician that it did not matter whether the list was present or not.
Fig. 4 – The physician uses her PDA to select an X-ray image to show to the patient.
The evaluation was inconclusive as to whether the physicians should be "allowed" to have "secret" information on the PDA. The answer to this question is not relevant here; what matters are the arguments used in the preference ranking. The arguments for allowing some of the information to reside only on the PDA were related to optimal use of the screen for showing X-rays, and the hiding of unnecessary information. The arguments for sharing all information on the patient terminal were related to trust and overview.
4.1.4. Focus shifts and time away from the patient
Almost all physicians commented that the PDA became an extra device to focus on. One of the physicians reported: “I get two places to see, and I experience that I speak less to the patient. I have to share my focus between there [patient terminal], there [PDA], and the patient. It’s quite demanding, and I have to share my focus between three different levels”. The results from the usability test showed that the change of focus between the PDA and the patient terminal was quite demanding for most of the physicians, and it became a disturbing element in the communication with the patient.
The arguments made by the test subjects during the preference ranking indicate that design alternatives requiring many focus changes between PDA and patient terminal were rated lower than less demanding design alternatives. When the physicians and the patients looked at or used the same screen, they felt that they were communicating on the same "level". When the physicians started using the PDA, some of them felt that it became a disturbing element in the conversation and that they now were communicating on different "levels".
4.2. Experiment 2: automatic identification of patients at point of care
The aim of this evaluation was to assess and compare the usability of different sensor-based techniques for automatic patient identification during administration of medicine in a ward. Lisby et al. analyzed the frequency and cause of medication errors in a Danish hospital. They found that 41% of the errors were related to administration. Of these, 90% were caused by wrong identification of patients. Currently, few hospitals have computer systems supporting the administration of medicine at the point of care. A recent study of the use of technology in drug administration in hospitals shows that only 9.4% of US hospitals have IT systems that allow the nurses to verify the identity of the patient and check doses at the point of care.
During drug administration, a health worker (typically a nurse) distributes prescribed medicine to ward patients. The nurse also signs off on the respective patients’ medication chart that the medicine has been administered and taken. For simplicity, the chosen test setup involved only two patients. Moreover, it was assumed that the patients were located in their respective beds throughout the whole scenario. For simplicity, it was also assumed that the correct medicine dosage for the respective patients was carried in the health worker’s pockets.
Fig. 5 shows a health worker in front of the first of the two patient beds.
The problem being addressed in the developed prototypes was that of identifying the correct patient at the point of care. A typical solution for patient lookup on a PDA or bedside terminal would be name search or selection from a patient list. These are activities that take time, and where the potential for error is large. By adding new ubiquitous-computing technology to the mobile EPR, such as token readers or location sensing, there is a potential for automating patient identification. Four different design solutions to the problem of automatic patient identification were compared. The four alternatives were the 2×2 possible combinations of two sensing technologies
and two device technologies. The two sensor technologies were barcodes (token-based) and WLAN positioning (location-based).
The WLAN positioning system used consisted of directional antennas in the ceiling that continuously detected the physical position of all WLAN devices in the room. The two device technologies were wireless PDAs (mobile) and bedside touch-screen terminals (stationary). An implicit assumption in the prototype implementations was that the computing devices could retrieve medication charts from an EPR system.
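The difference between the two sensing strategies can be sketched as two paths to the same patient lookup: an explicit barcode scan (token-based) versus an implicit match on the zone reported by the WLAN positioning system (location-based). The sketch below is purely illustrative; the patient records, identifiers, and function names are our own assumptions, not the prototypes' actual implementation.

```python
# Illustrative patient registry; identifiers and zone names are invented.
PATIENTS = {
    "wristband-0001": {"name": "Patient A", "bed_zone": "zone-1"},
    "wristband-0002": {"name": "Patient B", "bed_zone": "zone-2"},
}

def identify_by_token(scanned_barcode):
    """Token-based: the caregiver explicitly scans the patient's
    wristband barcode; identification is a deliberate foreground act."""
    return PATIENTS.get(scanned_barcode)

def identify_by_location(device_zone):
    """Location-based: the positioning system reports which zone the
    caregiver's device is in, and the patient whose bed lies in that
    zone is selected implicitly, in the background of attention."""
    for patient in PATIENTS.values():
        if patient["bed_zone"] == device_zone:
            return patient
    return None  # caregiver is not within any patient's bed zone

def fetch_medication_chart(patient):
    """Both identification paths converge on the same chart retrieval."""
    if patient is None:
        return None
    return f"medication chart for {patient['name']}"
```

The trade-off discussed in Sections 4.2.1 and 4.2.2 maps onto these two entry points: `identify_by_location` demands no explicit action but gives the user less control, while `identify_by_token` ties identification to a deliberate scan.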
The user interface for the medication chart was made extremely simple, as the focus of the study was not on medication charts, but on automatic identification of patients. A total of eight Norwegian health workers (seven nurses and one physician) were recruited from a local hospital. We had two persons with experience from health care simulate the two patients. The test participants were encouraged to interact with the persons simulating patients just as they would do in an everyday work situation. As in Experiment 1, the test subjects were asked to rank the four alternatives while explaining their rankings. The transcripts from the ranking sessions were analyzed in search of factors that influenced the rankings. These are summarized below.
4.2.1. Time on computer devices versus time on patient
Many test participants expressed a general concern that cumbersome information navigation would require them to pay too much attention to the computer devices rather than attending to the patient. They consequently all saw the benefit of automatic patient identification. The two location-based interaction techniques got a high ranking. These design alternatives took advantage of the
user’s natural mobility in the physical environment. The fact that these techniques allowed patient identification to occur in the background of the user’s attention can be viewed as an important reason for their high rating. According to one test subject, retrieving medication information based on a caregiver’s physical location “gives meaning simply because you necessarily have to be with the patient when administering his medicine.”
In order to retrieve patient information via tokens (i.e. barcodes), the users had to explicitly scan them. The test participants who preferred location-based interaction to token-based interaction argued that barcode scanning took attention away from the patient and the care situation.
4.2.2. Predictability and control
Earlier work on context-aware/ubiquitous computing has pointed out that autonomous/automatic computer behavior often comes at the cost of user control [26,27]. The conducted usability tests revealed similar tendencies. Users who preferred token-based interaction to location-based interaction found that getting a computer response as a result of an explicit and deliberate action gave them a feeling of greater control over the application. According to some test participants, the feeling of control over the application made the computer system seem "safer" to use. In other words, it made the users more certain that they were signing off on the correct patient medication chart.
We found that the potential lack of control some users experienced when testing the location-based solutions was related to the fact that the zones in the room were invisible. The system “magically” knew when the physician was near a patient. Despite the lack of control that many users experienced with the location-based solution, many were willing to give up control as long as it made patient identification easier.
4.2.3. Integration with work situation
Most test subjects commented that when administering medicine in their everyday work, they were accustomed to informing the patient verbally what medicine he or she was given. Many of the test participants therefore saw an additional benefit in having the opportunity to visually show medical information to the patient via the shared screen of the bedside terminal. Accomplishing this via the small screen on the PDA was experienced as being far more cumbersome. The PDA, however, was not found unsuited for accessing and signing off on electronic medication charts per se. Nevertheless, the perceived positive effect of having a shared computer screen left the majority of participants with the impression of getting the job done in a more satisfactory way with fixed bedside terminals.
Several test participants pointed out that another benefit of using stationary patient terminals versus a portable device was that it allowed them to have both hands free. This was seen as important as they often perform tasks at the point of care that require both hands (e.g. handing over medicine, helping patients in and out of their beds). Based on this, the majority of the test group found the fixed bedside terminals to be more seamlessly integrated with the overall work situation, while the PDA imposed more of a physical constraint. One of the potential drawbacks of the implementation, pointed out by some test participants, was related to privacy. When using a shared screen it is also possible for others (e.g. patients and visitors) in the room to see the information.
5. Factors that affect the usability of mobile EPR
A number of factors that affected the overall usability were identified in the two experiments. We have grouped them into three large classes: GUI usability, physical and bodily aspects of usability, and social aspects of usability.
5.1. Usability of the graphical user interface
In the two experiments, relatively few usability issues were caused by bad GUI usability. This is probably due to the simplicity of the prototypes. The simplicity of the GUI in the prototypes was appreciated by the users, but in a more realistic mobile EPR system the user interfaces will be more complex, and more of the usability problems will probably be due to problems in the user interfaces.
5.2. Physical and bodily aspects of usability
One could argue that usability problems caused by the GUI have their roots in a mismatch between the graphical user interface and human cognition. In a similar fashion, one could argue that there is a class of usability problems that have their roots in a mismatch between the physical aspects of the systems and the human physiology. The latter are often referred to as ergonomic problems, but for mobile ICT it also includes issues such as the accuracy of sensing technology. In the two experiments there were a number of physical and bodily issues.
Both experiments had issues related to screen size. In the first experiment, the PDAs were found to be ill suited for showing X-ray images, while in the second experiment large screens were preferred for showing medication lists to patients.
Both experiments also had issues related to body movement and the use of hands. In the first experiment some physicians commented that a good thing about having a PDA was that they no longer had to bend over the patient's bed to operate the terminal. In the second experiment, some users preferred a bedside terminal because it allowed them to have both hands free for other purposes. The most important aspect of mobile ICT is that it supports human mobility by allowing for computer access "any time, anywhere". The simplest way to achieve this is by letting the users carry the devices with them. In the second experiment, some of the users preferred PDAs because they allowed access while on the move. In Experiment 1 there was a need for large screens to show X-ray images, and it was not possible to combine this with mobility. In that case, support for mobility had to be weighed against other system requirements.
5.3. Social aspects of usability
Mobile technology is with the user in his/her "life world", which in most cases is a social world. Human life is to a large degree life with other humans, and mobile use therefore often happens in contexts with other people present. This is to a large degree the case for work in healthcare. Mobile devices and services are often used to communicate with other people or to coordinate shared activities, but they also play a role in the social interaction with other people. In the two experiments we found a number of usability issues that were related to social aspects of the use situation. In both experiments there were issues of shared versus private view of displays. These issues were caused by the social aspects of the clinical setting. There are certain parts of a physician's display that should be "off limits" to patients, such as medical data about other patients. However, in some situations in the experiments it was required that patients and
physicians should have a shared view.
In both experiments it was found that the system’s effect on the physician–patient face-to-face dialogue became an important usability issue. In this case, the usability of the system was affected by how the human–computer interaction matched the timing of the human–human interaction. If the human–computer interaction took too long and required too much mental effort, it reduced the quality of the human–human interaction, and as a consequence became a usability problem with the system. Good and bad overall usability in these cases were not only due to GUI design and ergonomics, but to what degree the system matched the requirements created by the social aspects of the situation.
5.4. Specifics of each use situation
For all three aspects of usability (GUI, ergonomic, and social), it is not the match with the users as such that matters, but the match with the use situation. In Experiment 2, it was important for the physician to have both hands free, while in Experiment 1 this was not important, even though the PDAs were the same. The difference in usability was not due to the ergonomics of the devices as such, but due to the different tasks and use situations in the two experiments. The contextual nature of usability should not come as a surprise, as the ISO standard defines usability in relation to the specifics of each context of use: ". . .with which specified users achieve specified goals in particular environments".
6. Consequences for usability testing of mobile EPR
Based on the identified factors that affect the usability of mobile EPR, we will present a set of recommendations. These recommendations come in addition to accepted best practice for usability testing and reporting as defined in the ISO/CIF document. For all usability testing it is important to identify the right user group(s), make tasks that are realistic, and create a physical and social test environment that mimics that of the intended use situation. In addition, test scenarios and tasks must be built on studies of work practice, and their realism must be verified by the test subjects. Usability testing of mobile EPR adds some additional challenges.
6.1. Usability of the graphical user interface
The GUI is a common source of usability problems in all ICT systems. Most mobile ICT systems for clinical use will have one or more screens with a graphical user interface. The device screens might be smaller than that of a typical PC, but we will still be faced with GUI usability issues very similar to those of desktop computing. When the mobile-EPR GUI is complex, we recommend doing a separate desktop usability test of the system prior to a full-scale usability test. By testing the GUI separately, it is possible to cover more system functionality in one test and to get feedback on GUI details such as menu structure, navigation, wording, information architecture, screen layout, and font size. It is possible to use the same test subjects for both the GUI test and the full-scale test, but we recommend using different test subjects, as prior exposure to the product will reduce the validity of the test results.
A full-scale usability test of mobile EPR will also implicitly test the GUI. Much can be learned from studying the user's interaction with the GUI in a full-scale test. A desktop usability test should not be seen as a substitute for recording and analyzing the GUI interaction in full-scale tests. Some aspects of GUI usability will only appear when the tasks and work environment are realistic, and it is necessary to study the details of the GUI interaction to identify these issues. To be able to identify GUI-related usability issues, it is necessary to record the screen content of the devices and the user's interaction for later analysis. For mobile technology it is not possible to use a video scan converter, as handheld devices have no video-out features. We have used three different techniques for recording GUI content and interaction on mobile devices.
(1) Some operating systems (e.g. Microsoft Windows Mobile, Symbian) allow for "mirroring" to a PC over WLAN through third-party software. This has allowed us to get digital video recordings with the screen content integrated with video from the lab cameras. The recording in Fig. 4 from Experiment 1 is an example. It is a real-time mix of two video sources and a "mirror" of the PDA content.
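The compositing step in such a recording setup can be illustrated in software. The following is a minimal sketch, not a description of the tooling we used: it assumes each video source delivers frames as numpy arrays, and places a mirrored device frame beside a lab-camera frame with letterboxing. The function name and the numpy-based approach are illustrative assumptions.

```python
import numpy as np

def compose_frames(cam_frame: np.ndarray, screen_frame: np.ndarray) -> np.ndarray:
    """Place the mirrored device screen beside the lab-camera frame.

    Both frames are H x W x 3 uint8 arrays. The screen frame is assumed
    to be no taller than the camera frame; it is centered vertically
    with black letterbox bars, then joined side by side.
    """
    h = cam_frame.shape[0]
    sh, sw, _ = screen_frame.shape
    # Black canvas as tall as the camera frame, as wide as the screen.
    canvas = np.zeros((h, sw, 3), dtype=np.uint8)
    top = (h - sh) // 2
    canvas[top:top + sh, :, :] = screen_frame
    # Joined frame: lab camera on the left, device screen on the right.
    return np.hstack([cam_frame, canvas])
```

In a live pipeline this function would be applied per frame before writing the combined stream to disk.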
(2) In some cases the handheld devices or their operating systems will not allow for “mirroring”. For those cases we have made use of a homemade docking device with a miniature wireless camera. Fig. 6 shows the device to the left and an example from a resulting recording to the right.
(3) For larger devices it might be necessary to allocate a video camera to get the details of the user's interaction. The top left part of the recording in Fig. 4 is from a roof-mounted camera that was fixed on the bedside patient terminal. In this case, the camera also captured the screen content, and eliminated the need for software mirroring of that display. When mirroring handheld devices one loses the details of the finger interaction. If possible, a roof-mounted camera should be used for following the user and capturing the details of the interaction with the device.
6.2. Physical and bodily aspects of usability
From the conducted experiments we learned that replicating the physical environment of real hospital settings is essential for producing valid results. For example, using human actors to represent patients (as opposed to more abstract representations or "imaginary" patients) and placing them in actual hospital beds is crucial in order to simulate how mobile technology accommodates point-of-care situations and the interaction between clinicians and patients. We also found that mimicking the physical configuration of an actual clinical environment can be used to guide the test subjects through a scenario. For example, by using two different rooms (a ward corridor and a patient room) and two patient actors in Experiment 2, physical movement between various locations and patients became a natural part of the scenario. This was essential for understanding the extent to which the precision of the position sensors met the requirements of the users.
6.3. Social aspects of usability
The findings from the two experiments point to the importance of getting the social aspects of the use situation right. Usability issues, such as the effects on the quality of face-to-face communication, cannot be measured unless usability tests include multiple users simultaneously. We recommend that the use scenarios for mobile EPR include enough user roles to be able to capture the social context of the use situation. This will differ from system to system. In some cases one might only need a physician and a patient, while in other cases we need to do tests with teams of health workers. It is important to make sure that the communication between the users is captured for later analysis, both the verbal and the non-verbal. Good sound quality is essential for capturing the verbal communication. We recommend one miniature wireless microphone for each test subject. An audio mixer is necessary, as most recording software only allows for stereo sound input. To capture the non-verbal communication, it is important to make sure that there are enough video cameras to be able to follow the test subjects around during the usability test. This is very similar to the requirement concerning video capture for device ergonomics.
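Where the recording chain is digital, the downmix from several per-subject microphones to the stereo pair can also be sketched in software. The following is a minimal illustration, assuming each subject's track is a mono numpy array of samples in [-1, 1] and using constant-power panning; the function name and parameters are illustrative assumptions, not a description of our lab hardware.

```python
import numpy as np

def mix_to_stereo(channels, pans):
    """Downmix N mono microphone tracks to one stereo pair.

    channels: list of equal-length float arrays, samples in [-1, 1].
    pans: one value per track in [0, 1]; 0 = hard left, 1 = hard right.
    Uses constant-power panning so perceived loudness is stable
    across pan positions.
    """
    n = len(channels[0])
    left = np.zeros(n)
    right = np.zeros(n)
    for sig, pan in zip(channels, pans):
        theta = pan * np.pi / 2
        left += np.cos(theta) * sig
        right += np.sin(theta) * sig
    # Normalize only if the sum would clip.
    peak = max(np.max(np.abs(left)), np.max(np.abs(right)), 1.0)
    return left / peak, right / peak
```

Panning each subject to a distinct position keeps the voices separable during later analysis, which is the same goal the hardware mixer serves in the lab.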
6.4. The need for flexibility
Hospitals are highly heterogeneous in terms of physical work environments. Looking beyond the requirements for each usability test, there is a need to make a usability laboratory for mobile EPR flexible enough to be able to simulate a number of different physical environments. These environments will differ in floor plan, furniture and artifacts. In our laboratory, we have installed movable walls that allow for easy reconfiguration. We have found this approach very useful as it saves us time setting up the physical environment for new usability tests. Based on our experience, we recommend that a usability laboratory for mobile EPR be constructed to allow for easy reconfiguration of floor plan, furniture and artifacts.
The analysis and recommendations in this study are based on a limited number of tests with a limited number of test subjects. In addition, the experiments were done with very simple prototypes in simplified use scenarios. The experiments have allowed us to identify some usability issues for mobile EPR, but our findings should not be seen as an attempt at making a complete list of such issues. More studies of mobile EPR are necessary to get a more complete picture of the usability challenges for this class of systems.
We have concluded that the overall usability of mobile EPR is determined by far more than the graphical user interface. We are confident that this will apply also to other mobile ICT systems for clinical settings. We consequently believe that our general recommendations, to simulate and record the physical and social aspects of mobile ICT for clinical settings, will be valid for future evaluations.
Clinical work in hospitals is information and communication intensive and highly mobile. Health workers are constantly on the move in a highly event-driven working environment. Most current Electronic Patient Record (EPR) systems only allow for access on stationary computers, while future systems will also allow for access on mobile devices at the point of care. While much is known about how to do usability testing of stationary EPR systems, less is known about how to do usability testing of mobile EPR solutions for use at the point of care. In two lab-based usability evaluations, we found that the usability of the mobile EPR systems was to a large extent determined by factors that went beyond that of the graphical user interface.