Healthcare Informatics: Need for Academic Programs
In 2006, President Bush issued an Executive Order that called for action by the federal government to focus on the quality and cost of healthcare, and to drive the adoption of standards for health information technology (HIT) interoperability; an earlier 2004 Executive Order had created the Office of the National Coordinator for Health Information Technology (ONCHIT). In 2009, the American Recovery and Reinvestment Act (ARRA) included the Health Information Technology for Economic and Clinical Health (HITECH) Act. And in 2010, the Patient Protection and Affordable Care Act (healthcare reform, or 'ObamaCare') was signed into law. The HITECH Act established funding for ONCHIT and created the 'meaningful use' incentives for electronic medical record systems, e-prescribing, clinical decision support, and a host of related health information technologies, and the healthcare reform law added further impetus.

Significant innovation and accomplishments have occurred over the years, but problems remain. For example, in mid-2021, the Department of Veterans Affairs (VA) struggled to avoid abandoning a $16-billion project to implement a new 'records platform,' touted in 2017 by then-President Trump "as the solution to bring veteran and active-duty medical files into the same computer system for the first time." Sadly, well-known principles in healthcare informatics could have averted the 'implementation problems' (and the associated harms to veterans and frustrations for VA providers). In short, the compelling need for academic programs in HIT continues.
Similarly, the burden of compliance mandates within the Health Insurance Portability and Accountability Act (HIPAA) has slowed adoption rates for electronic medical record systems. For example, HIPAA carries significant civil and criminal penalties for violations of its Privacy and Security Rules, which address the protection of patient records and data in electronic and other media formats. Clearly, individuals charged with protecting personal health information must be educated not only on the legal implications of their actions, but also on appropriate information technology security measures.
In Nevada's academic institutions, it is interesting to note the parallels between (1) the development of health care informatics programs within the context of already structured degree programs and (2) the incorporation of interoperable HIT in existing health care systems and facilities. In both cases, the needs are self-evident and compelling, but institutional politics, status quo structures (turf), and workflow issues inhibit adoption. Nonetheless, such obstacles must be overcome, as educational programs in the field are vital.
Investments must be made not only in hardware and software to implement healthcare information technology, but also in training a workforce that is capable of implementing and using new systems, and innovating so that the newly available information can be applied toward saving lives, improving care, and reducing costs. Importantly, health care professionals have begun to recognize these needs, and to implement their own initiatives for defining new programs that incorporate health care information technology. And locally, the College of Southern Nevada (CSN) has participated in a workforce development program, offering nationally developed courses in related HIT areas.
Within the nursing profession, a special summit on "Technology Informatics Guiding Education Reform" (TIGER) was convened in 2006. This invitational meeting brought together 120 representatives from more than 40 nursing organizations to develop "a vision for the future of nursing that enables nurses to use informatics in practice and education to provide safe, quality care." Moreover, these representatives defined concrete steps to be taken to move the profession toward this vision. Currently, the TIGER collaborative continues its work in the midst of continuing health care reform debates.
Similar processes are occurring in the organizations of professionals in public health, and within medical and dental specialties. Indeed, the whole field of personalized medicine based upon genomics is at the forefront of advanced healthcare, and is rooted in biomedical informatics. There is no question that health care informatics will become increasingly important and pervasive, but there are many questions regarding the particular directions that this growth will take in the different professions. However, virtually all professional organizations concerned with health-related disciplines have been infusing informatics competencies within academic programs in their fields.
Electronic Health Records System Development
In 2001, the US Department of Energy (DOE) awarded a cooperative agreement to UNLV, with Dr. Stephen Rice as Principal Investigator, for research and development of an Electronic Records System (ERS) for worker safety and health. This thirty-million-dollar program concluded in 2013, with migration of the ERS to administrative sites determined by the government (OSTI Report 1123681). The program served as an exemplar of collaboration among government, academia and the private sector. Teaming with commercial partners and working under guidance from the Nevada Site Office (NSO), the project team addressed DOE needs through applied research and development tasks.
The ERS was built to contain over fifty years of historical records on Industrial Hygiene and radiation dosage monitoring from the Nevada Test Site and other nuclear testing locations. The system was supported by user training, online help, a help-desk operation, and UNLV system administrators. Uninterruptible power, data backup, and both physical and cyber security subsystems were put in place to protect this asset and the privacy and security of the data. Authorized individuals from the NSO and researchers from the DOE Nuclear Testing Archive (NTA) in Las Vegas used the ERS daily.
The system was designed to allow access to, and knowledge discovery from, historic DOE worker health records. In its conception, the overarching objectives were (1) to provide an accessible, searchable, survivable electronic repository for the use, preservation and maintenance of authentic government records; (2) to perform interfacing and back-file conversion to centralize and manage the records; (3) to provide information discovery and analysis tools to extract content and provide decision-support; and (4) to demonstrate the efficiency and cost-effectiveness of university-industry collaboration in providing solutions to governmental agency needs.
The implementation effort followed a systems engineering approach for research, demonstration, and development, beginning with a study phase to determine the requirements baseline. This study phase included evaluation of the state of the practice, government and commercial standards, and analysis and process modeling at a representative sample of DOE sites across the United States, including Savannah River, Idaho, and Hanford. Once requirements were defined, there was a determination of subsets of tasks for research teams, based upon a gap analysis between what was needed and what technology was available.
A cornerstone of the program was incorporating technological advances into the Electronic Records System (ERS) through targeted applied research and development. For example, the program team completed scanning and digitization of hundreds of boxes of DOE historical Industrial Hygiene records. This included scanning and imaging, indexing, uploading, and finally, validation of record completeness and accuracy. A Cyber Security Program Plan was developed and certified by DOE. Training materials were completed for ERS use, and staff training was conducted. A parallel processing system was prepared, tested and implemented to identify, index and upload image data. A forms identifier and a full-text search capability, based on work from Professor Angelo Yfantis' team, were implemented to support automated indexing for nearly 1 TB of data.
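The forms-identification and search technology itself is not described in this text; purely as a hedged illustration of the indexing idea, the sketch below builds a minimal inverted index over OCR'd record text. All names are hypothetical, and the actual system was far more sophisticated.

```python
# Minimal sketch of full-text indexing over scanned-record text, assuming OCR
# output is already available as strings. Generic illustration only; not the
# actual ERS forms-identification and search system.
from collections import defaultdict

def build_inverted_index(documents: dict[str, str]) -> dict[str, set[str]]:
    """Map each term to the set of record IDs whose text contains it."""
    index = defaultdict(set)
    for record_id, text in documents.items():
        for term in text.lower().split():
            index[term].add(record_id)
    return index

def search(index: dict[str, set[str]], query: str) -> set[str]:
    """Return record IDs containing every query term (AND semantics)."""
    terms = query.lower().split()
    if not terms:
        return set()
    results = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

# Example: index two hypothetical records, then search them.
docs = {"rec-001": "radiation dosage survey 1962", "rec-002": "hygiene survey 1962"}
idx = build_inverted_index(docs)
print(search(idx, "survey 1962"))  # {'rec-001', 'rec-002'}
```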
Each electronic record was also "watermarked" to ensure its authenticity. The watermarking technology and other cyber security measures incorporated in the system were based on work from Professor Hal Berghel's team. Online help files were developed, a User Guide was prepared, and the ERS Help Line was established to provide user support via telephone and email. A Concept of Operations Plan for ERS was prepared along with an Operations Plan.
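The actual watermarking scheme developed by Professor Berghel's team is not detailed here; purely as an illustration of the authenticity goal, a digest-based sketch (with hypothetical names, assuming tamper detection is the aim) might look like the following.

```python
# Illustrative sketch only: a digest-based authenticity check. The real ERS
# watermarking embedded marks in the records themselves; this sketch merely
# shows one common way to make alteration of a stored record detectable.
import hashlib
import json

def fingerprint_record(image_bytes: bytes, metadata: dict) -> str:
    """Compute a SHA-256 digest over the record image plus its index metadata."""
    h = hashlib.sha256()
    h.update(image_bytes)
    # Canonicalize metadata so the same record always hashes identically.
    h.update(json.dumps(metadata, sort_keys=True).encode("utf-8"))
    return h.hexdigest()

def verify_record(image_bytes: bytes, metadata: dict, stored_digest: str) -> bool:
    """Re-derive the digest and compare; a mismatch signals alteration."""
    return fingerprint_record(image_bytes, metadata) == stored_digest

# Example: fingerprint at ingest, verify at retrieval.
digest = fingerprint_record(b"scanned image bytes", {"box": 17, "form": "IH-2"})
assert verify_record(b"scanned image bytes", {"box": 17, "form": "IH-2"}, digest)
```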
The Electronic Records System (ERS) was intended for deployment across the complex of all Department of Energy (DOE) and National Nuclear Security Administration (NNSA) sites. And while the ERS was designed for non-intrusive integration at any given work location, each individual deployment proved challenging, given the historical independence of operations and the varying missions of the separate sites. The ultimate goal, however, was to provide full functionality for all DOE records systems, at all sites, in exchangeable electronic formats.
Health Information Technology

Health Information Technology (HIT) can be broadly described as the application of IT to any aspect of health care. Examples of systems already in widespread use include Computer Physician Order Entry (CPOE), Electronic Medical Records (EMRs), Electronic Health Records (EHRs), Personal Health Records (PHRs), Continuity of Care Records (CCRs), and various electronic prescription ordering, patient management, and medical billing systems. When implementing any large-scale IT system, two critical principles underpin success: (1) meaningfully involve all potential users in the design, and (2) strategically plan for the changes in workflow associated with the implementation.
Even in cases where off-the-shelf IT systems are to be acquired (in which case, potential users cannot be involved in their design), current on-site staff should be actively involved in both the selection and the implementation of the new systems. Moreover, staff must be active participants in training and in workflow process reorganization. Almost without exception, implementing organizations should retain external consultants to facilitate and guide the processes of system selection and implementation.
Beyond IT systems such as those described above, Health Information Technology (HIT) applies more broadly to the application of emerging technologies to health care management in a variety of settings. These include patient monitoring systems, telemedicine and telehealth, wireless and wearable products and solutions, extended care, wellness programs for home and office, nutritional monitoring and feedback, integration of care, and personal fitness. Many new products are based upon the use of sensor data that is communicated to the individual, to a care provider, or to a third party that responds to prescribed signal levels. Indeed, a mobile health (mHealth) revolution is occurring, as patients and providers increasingly rely upon portable (including wearable or implanted) devices that both acquire and communicate health data.
Personalized remote sensing programs are being applied to address health management in cases of obesity, diabetes, congestive heart failure, hypertension, asthma and other conditions typically associated with aging and/or chronic illness. And each year there is a new array of HIT products that offer low-cost, online, preventive monitoring of key vital signs and activity indicators from unobtrusive, wearable body-sensors that connect to wellness health management portals. In turn, these portals are used by growing numbers of individuals not only to track their own wellness but also to communicate data to healthcare providers. In sum, such technologies will continue to improve individual health, and therefore population health, and will thereby contribute to reduced health care costs. Health Information Technology is central to reducing costs while improving patient safety and care.
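As an illustration of the 'prescribed signal levels' idea mentioned above, a monitoring portal might check each relayed reading against its prescribed band. This is a hedged sketch with hypothetical metric names and thresholds, not clinical guidance.

```python
# Hedged illustration of threshold-based remote monitoring: a wearable
# reading is checked against prescribed signal levels, and an alert is
# raised when a vital sign falls outside its band. Illustrative values only.
from dataclasses import dataclass

@dataclass
class Reading:
    metric: str   # e.g., "heart_rate_bpm", "systolic_mmHg" (hypothetical names)
    value: float

# Prescribed (low, high) bands per metric; illustrative, not clinical.
PRESCRIBED_LEVELS = {
    "heart_rate_bpm": (50.0, 110.0),
    "systolic_mmHg": (90.0, 140.0),
}

def check_reading(reading: Reading) -> str | None:
    """Return an alert message if the reading is out of band, else None."""
    band = PRESCRIBED_LEVELS.get(reading.metric)
    if band is None:
        return None  # metric not monitored
    low, high = band
    if not (low <= reading.value <= high):
        return f"ALERT: {reading.metric} = {reading.value} outside [{low}, {high}]"
    return None

# Example: a reading relayed from a wearable device triggers an alert.
print(check_reading(Reading("heart_rate_bpm", 128.0)))
```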
Tribology

Tribology is a term intended to unify studies of friction, lubrication and wear within a single academic discipline. The Greek root is tribos (rubbing), as the subject of tribology usually involves materials in relative motion, with one body or substance sliding and/or impacting against another. In some technologically important cases, relative sliding of solid surfaces occurs with a very thin film of liquid lubricant maintaining separation of the two sliding parts. Such separation of surfaces is found in the operation of journal bearings, and allows rotating machinery, large and small, to operate at high speeds with virtually no wear of the moving parts. In contrast, 'dry' sliding contact occurs without an interposed film of lubricant, and relatively high friction and wear are usually observed. In some contacts, surfaces approach each other in the "normal" direction, as with a hammer impacting a nail, or a drop of water falling onto a stone. Such solid-solid contacts lead to impact wear, or, when a liquid repeatedly impacts a solid, erosion.
When full fluid films of lubricant separate surfaces, bearings can operate in hydrodynamic or hydrostatic modes. In such applications, elasto-hydrodynamic design principles are well understood, and engineering analysis is straightforward. However, when there is some solid-to-solid contact, the relative motion involves an elasto-plasto-hydrodynamic regime, and studies and analyses are more challenging. Carrying this still further, there is a so-called boundary lubrication regime, and then unlubricated, or 'dry,' sliding contact, where the science becomes empirical. This is because there is enormous complexity in the physics and chemistry of interacting sliding surfaces, and the design of meaningful, generalizable friction and wear tests has not yet been achieved.
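A standard rule-of-thumb discriminator among these regimes (a textbook convention, not derived in this text) is the film parameter, the ratio of minimum film thickness to the composite roughness of the two surfaces:

```latex
% Film parameter (lambda ratio): a standard discriminator of lubrication regimes.
% h_min: minimum lubricant film thickness; R_{q1}, R_{q2}: RMS roughnesses of
% the two surfaces. Roughly: lambda > 3, full film; 1 < lambda < 3, mixed;
% lambda < 1, boundary lubrication.
\lambda = \frac{h_{\min}}{\sqrt{R_{q1}^2 + R_{q2}^2}}
```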
Surfaces themselves are complex, and usually composed of multiple layers that differ compositionally and physically from the underlying substrate material. For example, a "pure" copper material, in air, forms an oxide layer, typically harder than the substrate. If the copper has been machined, the near-surface zones are plastically deformed, and differ from the base material. Aside from the chemical and physical material layers in this "simple" example, consider also the geometrical characteristics of the surface. Typically there are machining striations, with both longer- and shorter-term waviness occurring over the surface. Superposed on these undulations are asperities of varying sizes and shapes. So, when two such surfaces are counter-posed, and then set in motion, the result at the microscopic level can be thought of as equivalent to inverting the Austrian Alps and rubbing them against their Swiss counterparts! The result usually involves elastic and plastic deformation, transfer and back-transfer of material, and the creation of 'near-surface' zones that are compositionally and mechanically mixed. In short, wear processes give rise to complex material layers that are produced in situ, and that differ substantially from the original material.
Note that the above description says nothing about the nature of the sliding contact, including the presence of additional materials (lubricants, dirt, debris, or other atmospheric constituents); the nominal and local forces (and levels of stress) between contacting bodies; the relative sliding velocity (or velocities) between the surfaces; the nominal (and local, time-varying) temperatures in and around the contacting parts; or the pattern of repetition of contacts (does one surface experience continuous sliding while the opposed surface experiences cyclic contact, as in a pin-on-disc machine, or do both surfaces have nominally continuous contact, as in a disc-on-disc tester?).
The bottom line is that sliding contact involves time-varying chemo-mechanical material properties, geometrical parameters, complex contact conditions, operational atmospheres, temperatures, and transients, for each of the two contacting surfaces. Indeed, many of these properties and parameters change as the sliding contact continues. And, wear processes inherently lead to geometrical changes that occur in situ. This leads to changes in the stiffness of specimen and counter-face elements, and to the mechanical response (e.g., vibration) of the overall apparatus in which repetitive contact is occurring. Alas, there is no such thing as a simple wear test!
Mechanisms of Friction and Wear
Several fundamental mechanisms contribute to the friction and wear of materials. The most commonly discussed are adhesion, abrasion, and fatigue, but corrosion and electrical discharge are likewise fundamentally distinct processes. Adhesion is based upon chemical bonding occurring between sliding elements. Typically, such bonding occurs over microscopic areas, and if the bond at the contact area is stronger than some locally weaker zone within the substrate, then as sliding continues, fracture occurs within that substrate, and a wear particle can be generated.
Abrasion occurs when a harder material literally plows a groove into a softer element. In a single-pass event, this "two body" process is easy to conceptualize. However, in practice, abrasively formed particles frequently become trapped within a sliding contact, oxidize, and thereby form a harder exterior shell, and these ‘harder particles’ contribute to "three body" abrasion, as relative motion continues.
In repetitive contact situations, fatigue processes can occur. And frequently, adhesion, abrasion and fatigue processes occur in parallel. Depending upon the role of trapped debris, and a host of other variables as outlined above, processes such as fretting, delamination, or surface fatigue can occur. Interestingly, corrosion can occur without sliding contact, but can be accelerated when such relative motion occurs. Such acceleration of material removal is understandable in the context of the numerous chemical, mechanical and thermal processes associated with sliding contact. Finally, electrical discharge can occur when two bodies at different electrical potentials are in proximity, sliding or not, in which case arcing can result in direct material transfer.
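As a point of reference, the classic Archard relation (a standard textbook model, not derived in this text) captures the first-order dependence of sliding wear on load, sliding distance, and hardness, with all of the complicating mechanisms above folded into a single empirical coefficient:

```latex
% Archard's wear equation: a standard first-order model of sliding wear.
% V: worn volume; K: dimensionless, system-dependent wear coefficient;
% W: normal load; s: sliding distance; H: hardness of the softer surface.
V = K \, \frac{W s}{H}
```

The fact that K must be measured anew for each material pair and operating condition is itself a restatement of the point above: wear is a property of the tribological system, not of a single material.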
Friction and Wear: Experimental Approaches
Because tribological mechanisms often act in parallel, and because tribo-contact is inherently complex, the design of an appropriate friction and/or wear test is challenging. Indeed, much of the "friction coefficient" and "wear factor" data published in handbooks is meaningless unless understood and applied in the narrow context in which the original tests were performed.
In fact, the only way in which meaningful engineering data can be obtained in a simulated environment is to design the test apparatus, conditions and environment to replicate, as closely as possible, the intended conditions and environment of use. For example, the design of a test apparatus for studies on the wear of dental restorative materials is very different from that of a machine to investigate implants for synovial joints. Testing materials at the levels of nominal contact stress, and at the operating temperatures, expected in the application itself is critical. The applied test load is frequently, and mistakenly, taken as the key variable, when it is actually the (usually time-varying) nominal contact stress that must be considered, at both micro and macro levels. Moreover, the stiffness of both apparatus and specimen is critical, and few investigators have even a superficial understanding of the importance of 'stiffness effects.'
So, in a friction or wear test, which body is the specimen, and which body is the ‘counter-face’? Again, many ‘investigators’ performing tests do not fully understand that both materials play important roles in determining chemical and physical processes, as well as test outcomes. And, what kind of load cycling should the specimen experience? As noted above, a pin-on-disc apparatus provides a test where the surface of the specimen (the "pin") experiences continuous contact, while a given spot on the counter-face (disc) track "sees" the pin come around only once per disc rotation. Clearly, the pin gets relatively hot and, after a transient phase, assumes quasi-steady-state temperature profiles, while the disc material sees time-varying load (and stress) fluctuation, as well as thermal cycling.
Finally, what kind of geometry should be used for the test specimen? Many investigators use a spherically ended pin, but what happens in the test as wear occurs, and the pin develops a "flat" that grows over time? And, as this flat area grows, what happens to the nominal contact stress at the contact? The force per unit area changes, and the stress cycling varies with it; how, then, does one correlate friction or wear test results with the conditions expected in the design application itself? Friction and wear testing are complicated endeavors, and understanding or applying the resulting experimental data requires considerable experience.
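To make the stress question concrete, here is a minimal sketch, assuming a constant applied load and treating the worn flat as a circle of growing radius (the values are illustrative, not from any particular test):

```python
# Minimal sketch: nominal contact stress on a spherically ended pin as a wear
# flat develops under constant load. Assumes the worn contact is a circle of
# radius a, so nominal stress = load / (pi * a^2). Illustrative values only.
import math

def nominal_contact_stress(load_newtons: float, flat_radius_m: float) -> float:
    """Nominal contact stress (Pa) over a circular wear flat of radius a."""
    return load_newtons / (math.pi * flat_radius_m ** 2)

load = 10.0  # constant applied load, N
for a_mm in (0.1, 0.2, 0.5, 1.0):  # growing flat radius, mm
    stress_mpa = nominal_contact_stress(load, a_mm * 1e-3) / 1e6
    print(f"flat radius {a_mm} mm -> nominal stress {stress_mpa:.1f} MPa")
```

Under the same 10 N load, the nominal stress falls by a factor of one hundred as the flat radius grows from 0.1 mm to 1.0 mm; a 'constant load' test is anything but a constant stress test.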
Slip-and-Fall Accidents; Wear-Related Failures
A practical design problem is the prevention of accidents in which a person is injured due to a "slip and fall" incident. In some cases, such accidents are due to bad luck, as with a banana peel carelessly tossed onto a walkway, awaiting a hapless pedestrian. But in many cases, such accidents are due to poor design, as with inappropriate surface texture or properties, poor maintenance, or the failure of a component due to wear. Investigation of accidents and product failures often depends upon a solid understanding of tribological systems and mechanisms, and of the multiple variables briefly outlined above.
The Role of the University Chief Research Officer
University-level administration can be fascinating. Some roles, such as those of Chief Financial Officer or Athletic Director, are relatively easily 'understood' by the public, while others, such as those of the Provost or Chief Research Officer (CRO), are less clear, and vary widely between institutions. Within most research universities, the CRO is responsible for all policies pertaining to compliance with regulations, to intellectual property and technology licensing, to the care and use of animals in research, to conflicts of interest involving faculty and/or students, to the participation of human subjects in clinical trials, and so on. In some institutions, the Research Officer also serves a function similar to that of Business Development in the private sector. That is, the CRO is active in seeking out and developing partnerships that enhance or build campus research strengths. All of this implies working with numerous organizations and foundations, as well as with governmental authorities and agencies at the local, state and federal levels.
In some cases, the Chief Research Officer serves as a lobbyist or Government Relations advocate, or oversees individuals who work in such capacities. This role involves the development of relationships with elected officials and their staffs, and collaborative work in which legislative actions contribute to university programs that benefit the public. On his or her campus, the Research Officer usually is charged with the development and administration of internal award programs that create opportunities for faculty and students, often with seed money to initiate new projects. It is important that such opportunities be made available fairly across the institution, so that, for example, individuals working in the arts and humanities have access to funds as readily as do those in the sciences and engineering.
Research officers at some campuses are associated with university-based Research Foundations, with specific roles and relationships varying widely from one institution to another. Such Foundations are typically established as 501(c)(3) corporations, separate from the public university that they support, but functionally able to assist in technology transfer and the establishment of new companies based upon university research, and legally able to form for-profit ventures. The larger goals for such Foundations are to enable effective translation of academic research into businesses that exploit research findings to create new products, jobs and services that benefit the public. Similarly, Research Foundation proceeds, including licensing revenues, are used to defray some of the expenses associated with technologically sophisticated research equipment and personnel. The net result is that the institution is better able to leverage state resources, and to support new research undertakings.
Relatedly, Research Officers may oversee the establishment, development and operations of Research and Technology Parks. Such ‘Parks’ often involve real estate partnerships that allow the university to benefit from collaborations with industries that, in turn, utilize research findings to further develop their products. Research Triangle Park in North Carolina is among the better known of such entities, benefiting from the academic prowess of Duke, UNC and NC State, and serving as an incubator for new business development based upon research from those institutions.
Another primary role for the campus Research Officer is management of conflicts of interest that arise in research-related activities. Research itself is a process of discovery. Faculty members spend many years becoming highly expert, usually in deeply specialized fields of inquiry. The competition between faculty within a given field is intense, with the stakes sometimes being very high, potentially involving academic awards, promotion, academic perquisites, better students, better pay, more funding for research, and opportunities for lucrative consulting contracts. And so, there are instances where ethical conflicts arise between faculty members, or between a faculty member and a student or a department chair, and the institutional Research Officer must develop and administer policies and procedures that address such instances.
Even more complicated than policies covering individual conflict of interest is the area of institutional conflict of interest. This latter area involves virtually all of the individuals wielding power within and over the university, most notably senior university administrators, boards of trustees or regents, elected and corporate officials, those who sit on university foundation boards, and the like. The associated issues have to do with many kinds of potential conflicts, which are most easily suggested by example. Consider a University Regent who owns a construction business, and receives a contract to build a new Student Union building. Who monitors the awarding of such contracts, and is s/he likely to yield to subtle pressure from the President to allow the Regent's company's bid to bubble to the top? It is perhaps not the job of the campus Research Officer to establish policy in the area of institutional conflict of interest, but often the CRO becomes involved because s/he works more closely with individual conflicts, and is generally aware of the need for such policy.
Teaching Engineering Design

Design is a creative process, beginning with an engineer's idea or with a client's concept. The process usually is iterative, and can be quite structured, depending upon the organizational context in which it is carried out. The lone "artist designer" can work in his/her studio environment and try one idea after another. In contrast, the engineer with Tesla or Boeing typically works within multiple design constraints, as well as within a tightly controlled schedule and many procedural requirements. So, given the diversity of contexts for design, how does one approach the teaching of such a process within a university setting and curricular constraints? It is not easy to give students experiences that mirror the wide variety of design environments in the real world, but it is essential to provide some exposure to the process!
In the context of engineering design education, there are three primary approaches. One, frequently applied with pre-college students, is enormously simplified, but provides an introduction to the basic process. For example, students might be provided a kit of materials and tools (balsa wood, glue, string, duct tape, etc.) and told to design a bridge, or a container for a raw egg to be dropped (without breaking) from the top of a building. Teams of students then ‘brainstorm’ to consider alternative concepts, choose what they consider most promising, and then ‘build’ and test a prototype. Such design experiences are fun for students, and do illustrate some of the elements within the design process.
A second approach is the paper design, and this teaching/learning process can be employed with students at any stage in the curriculum. In this approach, a design problem is presented, and students are expected to develop a solution, expressed in drawings and specifications, and usually also documented in a written report. This technique can be very valuable for exposing students to elements of the design process, and to illustrate principles, within the context of the problem as presented.
The third approach is as close to the real world as possible, and is based upon actual design problems. An example is a program developed at the University of Connecticut in which real design needs were solicited from local industry. Students were organized into two-person teams, assigned a given problem, and provided with information by their contact from the sponsoring company. This industrial contact gave background on the design needs, and the students generated alternative concepts for review and approval by both the company representative and the instructor. Following selection of the 'best' design approach, students performed detail design and worked within various real-world constraints to get parts produced and their assembly completed. The students then exhibited their fully functional prototypes at a special event where all projects were showcased for families and sponsoring organizations. The process was completed as a senior capstone project in a single 15-week semester! UNLV's Senior Design program is similar, but takes two semesters, and is perhaps more humane.
Innovation requires "thinking outside the box." It requires interpreting needs in their most fundamental form. For example, if you are told to design a better lawn mower, what do you consider? Most individuals immediately will focus upon current designs for lawn mowers, and then think about improvements that might be made to such devices. And certainly, much design is iterative and incremental, and products do improve from year to year. But if one limits thinking by starting from existing products, totally new designs will not be conceived. Think again about the lawn mower, and ask yourself what the design requirement really is. Is it not to cut grass or weeds or whatever needs trimming? And presto, by thinking of the basic design requirement, and not being limited by existing solutions, one conceives the Weed Whacker: a polymeric string that whips around and is enormously effective in cutting.
Innovation in science is similar, although often there is a serendipitous event or an apparently routine finding that spawns a question about what would happen if things were changed in a particular way. But the untrained individual would not understand the question in the same way as would the highly educated and experienced scientist, for the latter's reasoning and insight are based upon an enormous knowledge base. Invention, to paraphrase Louis Pasteur, favors the prepared mind.
The Process of Technology Transfer
In a university environment, technological innovation must be brought to the attention of those in the business community who can develop products and bring them to market. Typically, the skills, facilities and know-how needed to accomplish such commercialization are not held by the scientists and engineers doing cutting-edge research in the university. Instead, faculty members focus their attention on continually developing their advanced knowledge and expertise within their academic disciplines, overseeing their laboratories, mentoring their students, and so on. So, there is a need to take the knowledge generated by university faculty/student teams and pass it along (or "translate" it) to those who develop commercial products and create jobs. Such 'translation' is the responsibility of university offices of technology transfer.
At the University of Nevada, Las Vegas, development of the Office of Technology Transfer involved several steps. One was the creation of a university patent policy, and associated processes for evaluating potential returns on investment of funds used for filing and maintaining individual patents. Indeed, for each patentable invention, there are questions of breadth of coverage (international? which countries?) and funding for the filing and maintenance of patents per se. There are also questions of revenue sharing, with interested parties including not only the inventors themselves, but also department chairs and deans, and other administrative officers. Moreover, there is the division of responsibility between university offices of research and legal counsel, not to mention the Office of Technology Transfer itself, and a potentially interested University Research Foundation and/or those concerned with managing the institution’s Research and Technology Park! Fortunately, the Association of University Technology Managers (AUTM) gathers experience from institutions across the country, and provides data and information that are useful in framing campus dialogue.
In the end, any Office of Technology Transfer (OTT) is only as effective as its staff. And these staff members must possess broad and deep knowledge of technology itself, and therefore the Office needs either very special individuals or a rather large staff with varying expertise among members. And, beyond the technology expertise, one must have experience that extends from the academic inventor to the product developer, with knowledge of commercialization, entrepreneurship, financing, business development, and so on. In addition, there must be expertise in licensing, and very special skills in communication and negotiation.