Inventor(s)

HP INC

Abstract

This approach uses a multi-modal classification machine learning algorithm to suggest user actions based on analysis of user attention. The ML model analyzes auditory and visual cues to determine the action a user is most likely to prefer at any given moment. Auditory speech-pattern inference, together with other contextually aware engagement cues, is analyzed to determine whether the user is focused on the PC or engaged in other, non-PC activities. Speech patterns, duration, pacing, voice modulation, and keyword usage each indicate user attention, and considered together they provide a strong indication of the action required. Coupled with these auditory cues, visual cues such as gaze detection and presence detection further improve the model's ability to predict user intent. The resulting action score for this data set can be used to drive user-experience enhancements and system power-saving opportunities.
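The fusion of auditory and visual cues into an action score can be sketched as follows. This is a minimal illustration only: the feature names, weights, and thresholds are assumptions for the sketch, standing in for the trained multi-modal classifier the disclosure describes.

```python
from dataclasses import dataclass

@dataclass
class AttentionCues:
    """Cue inputs normalized to [0, 1]; names are illustrative."""
    # Auditory cues
    speech_activity: float    # fraction of recent window with detected speech
    voice_modulation: float   # variability in pitch/energy
    keyword_relevance: float  # PC-related keyword usage score
    # Visual cues
    gaze_on_screen: float     # fraction of time gaze is on the display
    presence: float           # user-presence confidence

def action_score(cues: AttentionCues) -> float:
    """Fuse auditory and visual cues into one attention score.

    A linear weighted fusion stands in for the ML model; the
    weights here are placeholders, not values from the disclosure.
    """
    auditory = (0.4 * cues.speech_activity
                + 0.2 * cues.voice_modulation
                + 0.4 * cues.keyword_relevance)
    visual = 0.6 * cues.gaze_on_screen + 0.4 * cues.presence
    return 0.5 * auditory + 0.5 * visual

def suggest_action(score: float) -> str:
    """Map the fused score to a suggested system action."""
    if score >= 0.6:
        return "keep_active"      # user focused on the PC
    if score >= 0.3:
        return "dim_display"      # partially engaged
    return "enter_low_power"      # attention elsewhere: save power
```

For example, a user who is speaking about a PC task while looking at the screen yields a high score and a "keep active" suggestion, while an absent user yields a low score and a power-saving suggestion.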

Creative Commons License
This work is licensed under a Creative Commons Attribution-Share Alike 4.0 License.
