Method and system for optimizing human-computer interaction interface of intelligent cabin based on three-branch decision

A technology relating to human-computer interaction interfaces and cockpits, applied in the fields of user/computer interaction input/output, computer components, mechanical mode conversion, etc. It addresses the problems of low gesture recognition accuracy and slow recognition speed, reduces interaction time, and provides a comfortable interactive experience with accurate gesture recognition.

Active Publication Date: 2018-12-28
CHONGQING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

[0004] Based on the above problems, using the ability of deep neural networks to extract features, combined with multi-granularity information expression and three-branch decision...



Examples


Example Embodiment

[0043] Example 1

[0044] The present invention includes the following steps:

[0045] S1. Collect the gesture video in the cockpit and preprocess it to obtain a static gesture image;

[0046] S2. Segment the gesture from the background in the gesture image to obtain a gesture region image;

[0047] S3. Express the gesture region image at multiple granularities, from coarse-grained to fine-grained, and use a convolutional neural network to extract the multi-granularity features of the gesture region image;

[0048] S4. From coarse-grained to fine-grained, calculate the conditional probability that the gesture region image at each granularity belongs to each category, and use three-way decisions to complete the gesture recognition sequentially;

[0049] S5. Perform semantic conversion on the recognized gesture region image, and operate the human-computer interaction interface according to the gesture recognition result after semantic conversion;
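As a purely illustrative aid for step S5, the sketch below shows how a recognized gesture label could be semantically converted into an HMI command. The gesture names, command names, and the dispatch_command callback are hypothetical and are not taken from the patent.

```python
# Hypothetical semantic-conversion table for step S5 (illustrative only;
# the actual gesture vocabulary and HMI commands are not specified here).
GESTURE_TO_COMMAND = {
    "swipe_left": "previous_menu",
    "swipe_right": "next_menu",
    "palm_open": "pause_media",
    "thumb_up": "confirm_selection",
}

def semantic_convert(gesture_label):
    """Map a recognized gesture label to an HMI command, or None if unknown."""
    return GESTURE_TO_COMMAND.get(gesture_label)

def operate_interface(gesture_label, dispatch_command):
    """dispatch_command is an assumed callback that drives the cockpit HMI."""
    command = semantic_convert(gesture_label)
    if command is not None:
        dispatch_command(command)
```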

[0050] The multi-granularity expression...
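To make steps S3 and S4 concrete, here is a minimal sketch of a coarse-to-fine, sequential three-way decision: at each granularity the classifier's conditional probabilities are checked against two thresholds, accepting a class when confidence is high enough, rejecting clearly invalid inputs, and otherwise deferring to the next, finer granularity. The cnn_predict callable, the resize-based granularity levels, and the threshold values are assumptions for illustration, not the patent's specified implementation.

```python
import cv2          # assumed available for resizing (opencv-python)
import numpy as np

ALPHA = 0.85  # acceptance threshold (illustrative value)
BETA = 0.30   # rejection threshold (illustrative value)

def multi_granularity_views(region_image, sizes=(16, 32, 64)):
    """Express the gesture region image from coarse to fine granularity,
    approximated here by resizing to progressively finer resolutions."""
    return [cv2.resize(region_image, (s, s)) for s in sizes]

def three_way_recognize(region_image, cnn_predict):
    """cnn_predict(view) is assumed to return a vector of conditional
    probabilities, one per gesture category, for the given view."""
    for view in multi_granularity_views(region_image):
        probs = np.asarray(cnn_predict(view))
        best = int(np.argmax(probs))
        if probs[best] >= ALPHA:   # accept: recognition completed at this granularity
            return best
        if probs[best] < BETA:     # reject: treat as an invalid / non-gesture input
            return None
        # boundary region: defer the decision to the next, finer granularity
    return best                    # finest granularity reached: take the top class
```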

Example Embodiment

[0076] Example 2

[0077] On the basis of steps S1 to S5, this embodiment also adds step S6 to obtain the optimal granularity by means of weighted summation, using the optimal granularity as the finest granularity, and repeating steps S3 to S5.

[0078] The HMI interface optimization design method is shown in Figure 4. The weighted summation method is used to obtain the final human-computer interaction interface optimization result at each granularity, so as to determine the optimal granularity of the gesture region image. The optimal granularity is then used as the finest granularity, and the convolutional neural network is used to extract multi-granularity features for new gestures and make three-way decisions sequentially;

[0079] Result = w × Acc + (1 - w) × Time

[0080] Time = T1 + T2

[0081] Among them, Result is the optimization result used to determine the best granularity of the gesture region image, Acc represents the accuracy of gesture recognition, Time represents the time spent in the gesture recognition process, and w represents the...
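A minimal sketch of the weighted-summation criterion above, assuming that Acc, T1, and T2 for each candidate granularity come from a validation run and that w is a weighting factor in [0, 1]. Because the description of w and the handling of Time is truncated in this excerpt, the score is simply computed per granularity; taking the highest-scoring granularity (argmax below) is an assumption for illustration only.

```python
def granularity_result(acc, t1, t2, w):
    """Result = w * Acc + (1 - w) * Time, with Time = T1 + T2."""
    time_cost = t1 + t2
    return w * acc + (1 - w) * time_cost

def select_optimal_granularity(candidates, w=0.8):
    """candidates: mapping granularity -> (Acc, T1, T2) measured on validation data.
    Returns the granularity with the highest Result score; the selection direction
    and any normalization of Time follow the full specification, which is
    truncated in this excerpt."""
    results = {g: granularity_result(acc, t1, t2, w)
               for g, (acc, t1, t2) in candidates.items()}
    return max(results, key=results.get), results
```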



Abstract

The invention belongs to the field of intelligent driving, and relates to a method and system for optimizing the human-computer interaction interface of an intelligent cabin based on three-branch decision. The method comprises the following steps: collecting gesture video in the cabin and preprocessing it to obtain a gesture image; segmenting the gesture image from the background to obtain a gesture region image; expressing the gesture region image at multiple granularities and extracting its multi-granularity features with a convolutional neural network; from coarse granularity to fine granularity, calculating the conditional probability that the gesture region image at each granularity belongs to each gesture category, and completing gesture recognition sequentially by using three-way decisions; semantically converting the recognized gestures, and operating the human-computer interaction interface according to the result of the semantic conversion. The best granularity is obtained by weighted summation and is used as the finest granularity. The invention not only can more accurately recognize gestures in the cockpit and execute gesture commands, but also can reduce the interaction time of the cockpit human-machine interaction interface and provide a more comfortable interaction experience for users.

Description

Technical field

[0001] The invention belongs to the field of intelligent driving, and in particular relates to a method and system for optimizing the human-computer interaction interface of an intelligent cockpit based on three-way decision-making.

Background technique

[0002] With the development of artificial intelligence and deep learning technology, intelligent driving has attracted the attention of many people. As one of the typical human-computer interaction methods in intelligent driving, gesture recognition is very important to the optimal design of the human-computer interaction (HMI) interface in the cockpit. Accurate and fast gesture recognition can not only provide a more comfortable interactive experience, but also improve the safety of drivers.

[0003] Current gesture recognition methods mainly include sensor-based devices and computer vision-based methods. Although the former has a better recognition rate, its cost is relatively high, and the interactive e...


Application Information

IPC(8): G06F3/01, G06K9/00
CPC: G06F3/011, G06F3/017, G06V40/28, G06V40/113
Inventor: 刘群, 张刚强, 王如琪
Owner: CHONGQING UNIV OF POSTS & TELECOMM