The assignment is to choose two of my classmates' posts and reply to them. One or two paragraphs is enough for each classmate. Because it is a discussion post, try to be informal and use phrases like "I like your points on…", "I found your post really interesting", "you have a good point…", "when I read your post I…", "I believe that…", something like that (using "I" statements).
For the Response Posts you will be graded as follows:
| Criteria |
| --- |
| You did not complete any response posts, or your response posts did not contribute to the discussion question. |
| You only completed one response post, and/or your responses did not contribute to the discussion in a significant way. |
| Your responses advanced the conversation in a meaningful way and provided a helpful and unique perspective on the discussion topic. |
The question was:
What is AI? How has AI been applied in the mental health care system (provide examples from Luxton, 2013)? What are the benefits of AI in the health care system, and what are some of the concerns? What is the goal of Natural Language Processing (NLP)? Describe each of the main distinct focuses of NLP. Select two of the levels of NLP and describe each, as well as provide an example. Describe some of the similarities and differences between the Statistical Approach and the Connectionist Approach. How did Fei Fei Li and colleagues incorporate NLP into their object-naming system described in the TED Talk? How can the technology described in the TED Talk be applied to real-world applications?
A link to the TED Talk video by Fei-Fei Li:
Your answer was:
Artificial intelligence (AI) is technology constructed to accomplish tasks that ordinarily require human intelligence. AI can be applied in the mental health care system in several ways. AI has enabled virtual reality simulated humans that can carry out intelligent, interactive conversations; these simulated humans are used for learning and training on mental health care issues. AI has also enabled virtual human avatars that can be used in person-to-person interactions, which are applied in psychological treatment, assessment, and testing. AI also enables virtual consultation among patients, providing affordable and convenient health care. Through the use of avatars in mental health care, privacy is enhanced, since clients who are concerned about their privacy can seek care from virtual care providers. AI-enabled computerized health screening systems have the capability to serve large numbers of people; such systems are highly efficient and reduce uncertainty in mental health treatment. Additionally, AI helps provide simulated practitioners with capabilities beyond those of a normal human being. Further, AI has enabled expert systems that are used for clinical diagnostics and decision making.

The goal of natural language processing (NLP) is to analyze and reproduce naturally occurring texts at one or more levels of linguistic analysis, in order to achieve human-like language processing for a range of applications. The main focuses of NLP include language processing, which entails analyzing language to produce a meaningful representation, and language generation, which involves producing language from such a representation. There are several levels of NLP. The first is the phonology level, which involves the interpretation of speech sounds within and across words.
This level involves three types of rules: phonetic rules, for sounds within words; phonemic rules, for variations of pronunciation when words are spoken together; and prosodic rules, for fluctuations in stress and intonation across a sentence. For example, for a system to accept spoken input, the sound waves are encoded into digital input. The second level is morphology, which deals with the componential nature of words. It involves morphemes, the smallest units of meaning. For instance, the word "preregistration" has the prefix "pre", the root "registra", and the suffix "tion".

Both the statistical approach and the connectionist approach build generalized models from samples of linguistic phenomena. The main difference is that the connectionist approach combines statistical learning with other theories of representation. Fei-Fei Li and colleagues integrated NLP into their project, which applied machine learning algorithms such as neural network models to build software that can recognize the contents of still photographs and describe them in natural language. This innovation can be applied in the real world, for example to robots capable of making autonomous decisions in unfamiliar situations.
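The "preregistration" example from the morphology level can be sketched as a toy rule-based morpheme splitter. This is only an illustration: the affix lists below are assumptions made up for this one example, and a real morphological analyzer would use a large lexicon and many more rules.

```python
# Toy morphological analysis: split a word into prefix, root, and suffix.
# The affix lists are tiny, invented examples; real analyzers use large
# lexicons and handle far more cases (infixes, irregular forms, etc.).
PREFIXES = ["pre", "un", "re"]
SUFFIXES = ["tion", "ing", "ed", "s"]

def split_morphemes(word):
    # Find the first matching prefix (empty string if none matches).
    prefix = next((p for p in PREFIXES if word.startswith(p)), "")
    rest = word[len(prefix):]
    # Find the first matching suffix on the remainder.
    suffix = next((s for s in SUFFIXES if rest.endswith(s)), "")
    root = rest[:len(rest) - len(suffix)] if suffix else rest
    return prefix, root, suffix

print(split_morphemes("preregistration"))  # ('pre', 'registra', 'tion')
```

Even this crude splitter recovers the prefix/root/suffix breakdown from the example above, which hints at how a morphology level can feed word structure up to higher levels of analysis.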
Jessica B.’s Answer:
Artificial Intelligence (AI) is technology designed to perform activities that normally require human intelligence. AI has been applied to the mental health care field in clinical treatment, assessment, and training. Today there is a steady increase in computer performance and in advanced technologies such as virtual reality, computer knowledge acquisition, language processing, sensing, and robotics. For example, PA and ultrasound students at RIT use virtual reality patients for practice before working in a clinical setting. The benefit is that students and professionals can keep learning through these advances. One concern is determining who is legally responsible if a mistake is made or something goes wrong. The goals of NLP are to paraphrase an input text, translate text into another language, answer questions about its content, and draw inferences from text. The two main divisions are language processing and language generation. Language processing analyzes language for the purpose of producing a meaningful representation (the reader/listener role). Language generation is the production of language from a representation (the writer/speaker role). Two levels of NLP are phonology and morphology. Phonology is the interpretation of speech sounds within and across words. For example, there are three types of rules in phonological analysis: (1) phonetic rules, for sounds within words; (2) phonemic rules, for variations of pronunciation when words are spoken together; (3) prosodic rules, for fluctuations in stress and intonation across a sentence. Morphology deals with the components of language; examples include prefixes, roots, and suffixes. The statistical approach uses mathematical techniques. The connectionist approach combines statistical learning with various theories of representation: a connectionist model is based on data analysis, and from the outcomes of that analysis a theory is formed, whereas the statistical approach is based on criteria of rules.
The ImageNet model became the winning architecture for object recognition. With this huge model, a computer algorithm can now tell you, from all the photo data, what an object is. The point of this advancement is to help humans in every way: doctors will have an extra pair of eyes on patients, cars will run smarter and safer on the roads, and robots can help during natural disasters. These machines will help us discover even more than we imagined.
AI is an abbreviation for artificial intelligence, which is technology designed to perform activities that need the equivalent of human intelligence.
AI has many uses in health care systems. It can be used for clinical treatment, consultation, and assessment. It can also be used to detect body temperature or facial expressions, and in the clinical decision-making process. As discussed by Luxton in his article, ELIZA is a simulation that used a human-computer interface to imitate empathetic communication and provide formulated responses. Another example is MeHDES, which encodes experts' knowledge about mental disorders into a knowledge base.
However, dependence on AI can have its complications, especially in psychological practice. Such dependence can erode the therapeutic bond between patients and doctors, which has been found to be important in improving patients' health. Also, excess dependence will eventually lead to job loss in the health care field, simply because jobs can be performed by machines rather than actual employees. It can also be complicated by legal or ethical issues.
Natural Language Processing uses computational techniques to analyze naturally occurring texts at one or more levels of linguistic analysis. Such techniques make it possible to achieve human-like language processing for many tasks: paraphrasing the input, translating the text into another language, answering questions about the text, and drawing inferences from it. NLP focuses on (1) naturally occurring texts, which can be any language humans use to communicate; (2) levels of linguistic analysis, which are types of language processing; and (3) human-like language processing, such as making inferences. There are two divisions of NLP: language processing and language generation. Language processing is used to analyze language and represent it in a meaningful way. Language generation is used to produce language from representations.
Morphology deals with the components of language. Morphemes can be bound, such as the "ed" or "s" added to free morphemes such as verbs. The syntactic level deals with the grammatical structure of sentences; without proper grammar, sentences can be nonsense and hard to understand. Simply put, syntax is the proper use of grammar, using words in the correct order and tense, for example saying "I have two feet." Improper use of grammar can show up in children's speech.
The statistical approach and the connectionist approach both use mathematical techniques to develop generalized linguistic models. However, connectionist models also combine statistical learning with theories of representation, which can be used to manipulate formulae. Also, connectionist models are less constrained, which makes them harder to observe.
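As a very rough illustration of the connectionist idea of strengthening associations through repeated exposure (rather than applying explicit rules), here is a minimal single-unit sketch. The learning rate and the stimulus/response data are made up for the example; real connectionist models are multi-layer networks trained on far richer data.

```python
# Minimal connectionist-style sketch: a single connection weight that
# grows each time a stimulus and response co-occur (a Hebbian-like
# update). Purely illustrative; not a real network model.
def train(pairs, lr=0.25):
    weight = 0.0
    for stimulus, response in pairs:
        weight += lr * stimulus * response  # strengthen on co-activation
    return weight

# Four co-occurrences of stimulus=1 and response=1:
w = train([(1, 1)] * 4)
print(w)  # 1.0 -- the association strength after training
```

The point is that the "rule" here is never written down anywhere; it exists only as a learned weight, which matches the observation above that connectionist models are harder to inspect than explicit rule systems.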
Fei-Fei Li's idea was to teach the computer how to see and "understand" the way children process the outside world during their first developmental years. They collected millions of pictures with all sorts of shapes and positions for a single object (she mentioned cats as an example) so they could use these data to nourish the computer's brain. They were able to teach computers to see objects, but that was only the first step. To teach the computer to communicate in sentences, they needed to teach computers not only to name objects but also to learn from human-language sentences.
This technology can help us do our jobs. It can be used in the medical field to diagnose diseases when it sees certain symptoms. It can be used to warn us when something is going wrong, such as a child drowning. It can help us explore the world in a better way.
Artificial Intelligence is an area of computer science that aims to create intelligent systems able to work semi-autonomously in areas like speech and language recognition, learning, and problem solving. There are many areas of study within AI, such as machine learning, neural network models (which fall under machine learning), linear regression models (also under machine learning), decision tree models (again, under machine learning), and many others. Natural language processing can also fall under machine learning, but can be done procedurally.

AI in the health and psychology field is a very challenging topic. Many of the examples in Luxton's article, while backed by promising if not successful research, never moved into commercial markets. Artificial screening processes have too high a degree of error and far too much liability to be successful, even with the benefits of cheaper labor and constant access. Tech companies like Microsoft and Comcast have implemented similar units for their technical support resources, and they are possibly the worst thing to happen to technical support at these companies from a usability standpoint. If that cannot be done properly, how can we reasonably put lives in the hands of artificial doctors and clinicians? While the idea of "Super Clinicians" is much more reasonable, the technology is not reliable enough for a commercial market and is far too expensive. However, many of these technologies have found their way, in smaller forms, into devices such as commercial heart rate and blood-oxygen monitors. These devices can, in some cases, monitor stats and determine, to some degree of certainty, when a patient may be in trouble just before it happens. Similarly, we see this technology more and more in smartwatches and phones at the consumer level. As with all technology, however, the primary concern, especially within artificial intelligence and machine learning, is privacy.
This sort of technology requires large amounts of data in its system to be more successful. This is a huge part of why Google has such a successful AI program: they scan and classify almost every piece of data that enters their system. This includes emails on Gmail, text messages if you use their Google Messenger Web, and of course Google Assistant. How else would their natural language processing be so amazing?
Okay, NLP: what is it? NLP is a process by which computers are able to analyze and decode, or encode, natural text. NLP is generally split into two areas, natural language understanding and natural language generation. In understanding, the computer must be able to decode and understand language in a deep way; in generation, the computer must be able to encode language in a way that makes sense given predefined inputs. Of course, language is complicated, and depending on the context of the problem, different levels of NLP may be required. For instance, when doing NLU from voice, the phonology level must be used to properly input, process, and eventually tokenize soundbites so they can be combined into words that make sense. When doing NLU with text input, a lexical understanding can likely be used (or a morphology understanding to capture suffix and prefix values). This method captures word meanings rather than sound meanings, and the result can then be moved up the system into a syntactic and semantic understanding.
There are several different approaches that can be used in NLP, depending on the context and needs of the problem. Two common approaches are the statistical approach and the connectionist approach. Both build generalized models from examples of linguistic phenomena to create and understand language. The statistical approach is more mathematically based, using Markov models over language states. While effective, this system is a little more rigid. Conversely, the connectionist approach uses a network model, similar to a neural network, to make connections; this tends to be more fluid, as it allows the system to change more freely.
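To give a concrete feel for the statistical approach, here is a toy first-order Markov (bigram) model that estimates the probability of the next word from raw counts. The two-sentence "corpus" is invented purely for illustration; real statistical models are trained on millions of sentences and use smoothing for unseen word pairs.

```python
from collections import Counter, defaultdict

# Toy bigram (first-order Markov) model: estimate P(next | current)
# from raw counts over a tiny invented corpus.
corpus = "the dog chased the cat . the dog saw the cat .".split()

bigrams = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigrams[current][nxt] += 1

def prob(current, nxt):
    counts = bigrams[current]
    return counts[nxt] / sum(counts.values()) if counts else 0.0

print(prob("the", "dog"))     # 0.5 -- "the" is followed by "dog" 2 of 4 times
print(prob("dog", "chased"))  # 0.5
```

This also shows the rigidity mentioned above: any word pair the model never saw gets probability zero, which is why the connectionist approach's ability to generalize more fluidly can be attractive.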
Fei-Fei Li used advanced computer vision combined with NLP in a neural network approach to intelligently describe what an image contains. This process has been adopted by companies like Facebook to create descriptions for images uploaded to Facebook or Instagram, to help the visually impaired.
These types of technologies are used everywhere, whether or not you realize it. In 2012 I worked on one of the premier machine learning/computer vision teams at MIT, developing algorithms that could identify moving objects in space and determine whether they had already been found, as well as their detailed location and trajectory. This project was extremely successful and classified nearly 1.3 million new and undiscovered objects moving through space that were larger than 3 km in diameter.
AI is technology, such as computers, that performs activities normally requiring human intelligence. It includes the areas of science dedicated to the technological development and study of equipment that copies human neural function without being explicitly programmed to do so.
AI has been applied in the mental health care system since the 1960s, with the earliest programs being ELIZA, an assessment tool, and PARRY, a training tool that mimicked people with schizophrenia. Currently, virtual reality avatars are used in the military to connect service members to health resources. Technology under development, according to Luxton, includes the Super Clinician, which could replace a human therapist, and customizable computer games to assist in patient coaching.
AI is used in the general healthcare system for clinical decision-making and assists in medication review to identify contraindications. It can assist in problem solving in research and clinical practice. It can help navigate through complex data, allowing providers to focus on relevant information, and it can assist in time management and reducing human error.
Concerns regarding AI in the health care system include legal and ethical issues. Legally, and similar to telemedicine, there may be professional licensure issues to contend with as AI programs cross state jurisdictions. Ethically, AI systems are capable of developing their own values and beliefs, which can lead to judgment errors.
The goal of NLP is to accomplish human-like language processing, and eventually, understanding. The focuses of NLP are language processing and language generation. Language processing equates to the role of reader/listener and it is the process of analyzing language for the purpose of producing a meaningful representation. Language generation equates to the role of writer/speaker and refers to the production of language from a representation.
One level of NLP is the syntactic level, which analyzes the words in a sentence to determine the grammatical structure. The output displays the structural dependency of the words and conveys the sentence's meaning. The choice of grammar impacts the choice of parser. The example given in Liddy is "The dog chased the cat." The arrangement of the words reveals the meaning of the sentence.
Another level of NLP is the semantic level, which determines how words interact with each other in a sentence, based on their meaning. Words that have multiple meanings and require clarification from the rest of the sentence are processed at this level.
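A crude sketch of semantic-level processing is word-sense disambiguation: picking the sense of an ambiguous word that best fits the rest of the sentence. The toy glosses for "bank" below are invented for illustration, not drawn from a real lexicon, and the overlap count stands in for far richer semantic analysis.

```python
# Toy semantic-level sketch: disambiguate "bank" by counting overlap
# between the sentence and an invented gloss for each sense.
SENSES = {
    "financial": {"money", "deposit", "account", "loan"},
    "river": {"water", "shore", "fishing", "stream"},
}

def disambiguate(sentence):
    words = set(sentence.lower().split())
    # Pick the sense whose gloss shares the most words with the sentence.
    return max(SENSES, key=lambda s: len(SENSES[s] & words))

print(disambiguate("she opened an account at the bank"))  # 'financial'
print(disambiguate("we went fishing on the bank"))        # 'river'
```

Even this crude overlap heuristic captures the key idea of the semantic level: the meaning of an ambiguous word is resolved by the words around it.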
The Statistical Approach and the Connectionist Approach are similar in process because they build rules automatically. The basis for evaluation for both approaches is in the form of computed scores, and they are more robust when unexpected input is received due to their knowledge storage systems. One of the differences is in rule construction, which is surface-level for the statistical approach and unrecognizable for the connectionist approach.
NLP was incorporated into the object-naming system described in the TED talk through data collection and analysis, and rule construction, consistent with the symbolic, statistical and connectionist approaches. It has been applied to real-world applications in the form of surgical robotics and the beginnings of self-driving cars.
AI, or artificial intelligence, is the theory and development of computer systems able to do tasks that normally require human intelligence: visual perception, speech recognition, decision-making, and translation between languages. For instance, AI has been applied in the mental health care system through AI-enabled virtual reality human avatars, which have the potential to be used for all other types of person-to-person interactions. The Super Clinician is built with advanced sensory capabilities, such as detecting body temperature and observing behavior. Computer games can provide skills training, behavior modeling, and therapeutic distraction, which increase patient engagement and reduce the stigma of seeing a psychologist. There is also augmented reality, which combines virtual reality with the real world through graphics created in live video.
One benefit of AI in the health care system is AI-enabled, kiosk-based computerized health screening where large numbers of people need to be screened. It saves a lot of time and can process complex data. These systems can share data access and provide information faster. In therapeutic computer games, machine learning can help customize to an individual's needs. However, one concern is that patients might develop connections with therapeutic machines and trust AI objects more than they trust real humans.
Natural Language Processing is a theoretically motivated set of computational techniques for analyzing and representing naturally occurring texts at one or more levels of linguistic analysis, in order to process language like a human. The goal is to paraphrase an input text, translate text into a different language, answer questions about the text's content, and draw inferences from the text. The main distinct focuses of NLP are naturally occurring texts (any language or mode humans use to communicate with each other), multiple types of language processing, and NLP as a discipline within AI. One of the seven levels is phonology, the interpretation of speech sounds within and across words. The article indicates that "AI-enabled virtual reality human avatars with speech detection and natural language processing technology could also enhance expert systems by providing a human-like verbal dialogue interface." Another level is pragmatics, which uses context over content to understand something. For instance, the machine in a therapeutic computer game learns how to help and customize to an individual's needs. The connectionist approach to studying human cognition uses mathematical models, shares common information-processing assumptions, and focuses on the increasing strength of association between stimulus and response, without referring to abstract rules or restructuring. The statistical approach uses various mathematical techniques and is part of the same line of study.
Fei-Fei Li and her colleagues' research field was computer vision and machine learning. Their goal was to get a machine to see and recognize objects. They used NLP (pragmatics), since they worked on feeding in information and getting a computer to understand, identify, and describe objects. The computer vision models were capable of generating a sentence to describe an image.