Information processing method and device, equipment and storage medium
An information processing method and a related access-information technology, applied in the fields of information processing methods, devices, equipment and storage media, which can solve the problem of insufficient accuracy of access information and achieve the effect of improving accuracy and precision.
Pending Publication Date: 2022-05-27
BEIJING SENSETIME TECH DEV CO LTD
AI-Extracted Technical Summary
Problems solved by technology
[0002] In related technologies, based on the text information or image information that appears during the access to the application, the ac...
Method used
Abstract
The embodiment of the invention discloses an information processing method and device, equipment and a storage medium, and the method comprises the steps: obtaining a to-be-accessed application; in response to an access operation for the to-be-accessed application, determining multi-modal access information associated with the to-be-accessed application; and classifying the multi-modal access information to obtain target access information.
Examples
- Experimental program (1)
Example Embodiment
[0032] In order to make the purposes, technical solutions and advantages of the embodiments of the present application more clear, the specific technical solutions of the invention will be described in further detail below with reference to the accompanying drawings in the embodiments of the present application. The following examples are used to illustrate the embodiments of the present application, but are not intended to limit the scope of the embodiments of the present application.
[0033] In the following description, reference is made to "some embodiments", which describe a subset of all possible embodiments. It is understood that "some embodiments" may refer to the same subset or different subsets of all possible embodiments, and that these may be combined with each other without conflict.
[0034] In the following description, the terms "first/second/third" are only used to distinguish similar objects and do not denote a specific ordering of objects. It is understood that, where permitted, the specific order or sequence of "first/second/third" may be interchanged, so that the embodiments of the application described herein can be practiced in sequences other than those illustrated or described herein.
[0035] Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the technical field pertaining to the embodiments of the present application. The terms used herein are only for the purpose of describing the embodiments of the present application, and are not intended to limit the embodiments of the present application.
[0036] Before the embodiments of the present application are described in further detail, the terms involved in the embodiments of the present application are explained; these terms are subject to the following interpretations.
[0037] Knowledge Graph: known in the library and information science community as knowledge-domain visualization or knowledge-domain mapping, a knowledge graph is a series of graphs showing the development process and structural relationships of knowledge. It uses visualization technology to describe knowledge resources and their carriers, and to mine, analyze, construct, map and display knowledge and the interconnections between items of knowledge.
[0038] Exemplary applications of the information processing device provided by the embodiments of the present application are described below. The information processing method provided by the embodiments of the present application can be applied to various types of user terminals with data processing functions, such as notebook computers, tablet computers, desktop computers, cameras, and mobile devices (for example, personal digital assistants, dedicated messaging devices and portable game devices), and can also be implemented as a server.
[0039] The method can be applied to a computer device, and the functions implemented by the method can be realized by a processor in the computer device calling program code; of course, the program code can be stored in a computer storage medium. It can be seen that the computer device includes at least a processor and a storage medium.
[0040] The embodiments of the present application provide an information processing method. As shown in Figure 1, which is a schematic flowchart of the first information processing method provided by the embodiment of the present application, the steps shown in Figure 1 are explained below:
[0041] Step S101, acquiring the application to be accessed.
[0042] In some embodiments, the application to be accessed may be any application that can run on an electronic device. Applications to be accessed may be classified by application type, including network communication applications, word processing applications, video/audio playback applications, spreadsheet applications and other arbitrary applications, or classified by application scenario, such as teaching applications, life applications, work applications and entertainment applications.
[0043] In some embodiments, the application to be accessed may be acquired by selecting it from the electronic device according to a preset rule, or by, in response to an operation of the electronic device by the operating user, determining the application associated with the operation as the application to be accessed. In some embodiments, the user may want to watch a video and therefore starts the video playback application in the electronic device, which is then the application to be accessed; or the user needs to study with an online teaching application on the electronic device, and the acquired application to be accessed is then that teaching application.
[0044] In some embodiments, the number of applications to be accessed may be one, two or more. When the number of applications to be accessed is two or more, the types of the two or more applications to be accessed differ from one another; for example, they may include an instant messaging application and a teaching application.
[0045] Step S102, in response to an access operation for the application to be accessed, determine multimodal access information associated with the application to be accessed.
[0046] In some embodiments, in response to an access operation for the application to be accessed, multimodal access information associated with the application to be accessed is sensed and acquired; the multimodal access information may be information obtained and output from the relevant server, or information generated, while the application to be accessed is being accessed.
[0047] In some embodiments, the multimodal access information includes but is not limited to: text information, image information, video information and audio information accessed based on the access operation. It may also include access behaviors or access events for the access information carried in the access operation, such as: long continuous clicks on access information, periodic repeated viewing, bookmarking access information, and saving or downloading access information.
[0048] In some embodiments, where the application to be accessed is a web browsing application, the page content is parsed and recorded through adaptive analysis of the page content, and records of visiting the browsed page together with the related access data are kept, thereby obtaining the multimodal access information.
[0049] In some embodiments, the access operation of the application to be accessed may be issued by any operation object, such as: any user operating the electronic device, or any device that interacts with the electronic device.
[0050] In some embodiments, when the application to be accessed includes multiple different applications on the electronic device and the operation object performs related access operations on these applications, the determined multimodal access information includes but is not limited to: personal instant messaging information, personal web browsing information, personal document browsing records, and personal voice or short message information.
[0051] Step S103: Classify the multimodal access information to obtain target access information.
[0052] In some embodiments, the classification of the multimodal information may be performed according to the information type of the multimodal information, or according to the information content of the multimodal information, etc., so as to obtain the target access information.
[0053] In some embodiments, a preset knowledge graph may be used to classify the multimodal access information to obtain target access information; wherein the knowledge graph may be determined by the application scenario of the application to be accessed.
[0054] In some embodiments, the multimodal information may first be parsed to obtain a parsing result, and the parsing result may then be filled into a preset knowledge graph according to the relevant types to obtain the target access information. The target access information may also be presented as multimodal information, such as video, audio, text, and the like.
[0055] In some embodiments, the information output or displayed by the application to be accessed during the access process, that is, the multimodal information displayed during the access (including but not limited to text information, audio information, video information, image information, and the access behaviors and access events generated during the access), can be classified to obtain the specific and detailed target access information generated between the access object and the application to be accessed during the access, that is, the interaction process. The target access information facilitates subsequent management and retrieval of the relevant information of this access.
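The classification in step S103 can be illustrated with a minimal sketch. The record format and the type labels below are assumptions for illustration only; the patent does not specify a concrete data structure.

```python
# Hypothetical sketch of step S103: grouping multimodal access records by
# information type to form the target access information.
from collections import defaultdict

def classify_by_type(records):
    """Group (info_type, payload) records into a type-keyed mapping."""
    target = defaultdict(list)
    for info_type, payload in records:
        target[info_type].append(payload)
    return dict(target)

records = [
    ("text", "lecture notes"),
    ("video", "lesson-3.mp4"),
    ("text", "chat message"),
    ("event", "bookmarked page"),
]
target_access_info = classify_by_type(records)
# e.g. target_access_info["text"] -> ["lecture notes", "chat message"]
```

Classification by information content rather than type would use the same structure with a content-derived key in place of `info_type`.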
[0056] In some embodiments, when different applications are accessed, all types of information generated by the different applications during the access process, that is, the multimodal information, can be obtained in real time, and the multimodal information can be classified and processed to obtain the access intent information corresponding to the different applications. In other words, cross-application and cross-modal information collection can be realized, and the various collected modal information of the different applications can further be displayed or fed back. In this way, the accuracy and precision of determining the access records and access information generated in the process of accessing an application can be improved.
[0057] An embodiment of the present application provides an information processing method which first acquires an application to be accessed; then, in response to an access operation for the application to be accessed, determines multimodal access information associated with the application to be accessed; and finally classifies the multimodal access information to obtain the target access information. In this way, the accuracy and precision of determining the access records and access information generated in the process of accessing the application can be improved.
[0058] In some embodiments, based on the access operation, information is extracted from the to-be-processed page generated by the application to be accessed while it is being accessed, so as to obtain the multimodal access information. In this way, richer and more accurate multimodal access information can be obtained. That is, step S102 provided in the above embodiment can be implemented by the following steps S201 to S202. As shown in Figure 2, which is a schematic flowchart of the second information processing method provided by the embodiment of the present application, the steps shown in Figure 1 and Figure 2 are described below:
[0059] Step S201, in response to an access operation for the application to be accessed, acquire the to-be-processed page generated by the application to be accessed during the access process.
[0060] In some embodiments, in response to an access operation for the application to be accessed, a to-be-processed page generated or displayed by the application to be accessed during the access process is acquired; the to-be-processed page may be the relevant information presented by the application to be accessed on the display device of the electronic device during the access process. The pages to be processed may change in real time with the access operations, and the number of pages to be processed may be one, two or more.
[0061] In some embodiments, when the number of applications to be accessed is two or more, different applications to be accessed generate different pages to be processed during the access process, and the corresponding numbers of pages to be processed also differ.
[0062] Step S202: Perform information extraction on page information in the to-be-processed page to obtain the multimodal access information.
[0063] In some embodiments, the multimodal access information is obtained by performing information extraction on the page content of the to-be-processed page, on the background records of the page, and on the related access operations on the to-be-processed page; the page information in the to-be-processed page can be sensed and stored.
[0064] In some possible implementations, the multimodal access information includes a variety of accessed data, including but not limited to the following, that is, the multimodal access information includes at least one of the following:
[0065] Text information, image information, video information, audio information, access behavior information, access events.
[0066] In some embodiments, the text information, image information, video information and audio information may be information generated in the process of accessing the application to be accessed. The access behavior information may describe access behaviors generated in the process of accessing the application to be accessed, for example long-press operations or watching the same video repeatedly; an access event may be an event generated in the process of accessing the application to be accessed, such as saving, adding to favorites, commenting or forwarding.
[0067] In some embodiments, more abundant access information, that is, multimodal access information, is obtained by acquiring information in multiple different forms.
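The six kinds of access data listed above can be captured with a simple data model. The class and field names below are illustrative assumptions, not identifiers from the patent.

```python
# Illustrative data model for multimodal access information: text, image,
# video, audio, access behavior, and access event.
from dataclasses import dataclass
from enum import Enum

class Modality(Enum):
    TEXT = "text"
    IMAGE = "image"
    VIDEO = "video"
    AUDIO = "audio"
    BEHAVIOR = "access_behavior"   # e.g. long press, repeated viewing
    EVENT = "access_event"         # e.g. save, favorite, comment, forward

@dataclass
class AccessRecord:
    modality: Modality
    payload: str
    app: str = "unknown"   # which accessed application produced the record

record = AccessRecord(Modality.EVENT, "favorite", app="teaching-app")
```

A stream of such records is what the later classification and processing steps would consume.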
[0068] In some embodiments, a corresponding knowledge graph is constructed based on the application scenario of the application to be accessed, and the acquired multimodal access information is then classified based on the constructed knowledge graph to obtain more precise and accurate target access information, which facilitates subsequent management and retrieval of the relevant access information. That is, before step S103 provided in the above embodiment, the following steps S301 and S302 may also be performed. As shown in Figure 3, which is a schematic flowchart of the third information processing method provided by the embodiment of the present application, the steps shown in Figures 1 to 3 are described below:
[0069] Step S301, determining an application scenario of the application to be accessed.
[0070] In some embodiments, the corresponding application scenario may be determined based on the type of the application to be accessed, or the corresponding application scenario may be determined based on the function of the application to be accessed.
[0071] In some embodiments, if the application to be accessed is a teaching application, the corresponding application scenario is determined to be a learning scenario; if the application to be accessed is an instant messaging application used for conference communication, the corresponding application scenario is determined to be a meeting scenario.
[0072] Step S302, constructing a knowledge graph matching the application scenario.
[0073] In some embodiments, constructing a knowledge graph matching the application scenario may mean generating knowledge graphs corresponding to learning scenarios, conference scenarios, shopping scenarios, entertainment scenarios, and the like.
[0074] In some embodiments, when the application to be accessed is a teaching application, the determined knowledge graph is a teaching-system knowledge framework, including but not limited to: knowledge system frameworks of different types, knowledge system frameworks of different user dimensions, and a general knowledge system framework. When the application to be accessed is a conference application, the determined knowledge graph is a conference framework, including but not limited to: a system framework for different participants, a system framework for different positions, and the like.
[0075] In some embodiments, knowledge graphs corresponding to different applications to be accessed may be different.
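Steps S301 and S302 can be sketched as a mapping from an application's scenario to a matching knowledge-graph skeleton. The scenario names and frame categories follow the examples in the text; the mapping itself is an assumption for illustration.

```python
# Minimal sketch of steps S301-S302: scenario -> knowledge-graph skeleton.
SCENARIO_FRAMES = {
    "learning": ["knowledge-type frameworks",
                 "per-user knowledge frameworks",
                 "general knowledge framework"],
    "meeting": ["per-participant framework",
                "per-position framework"],
}

def build_knowledge_graph(scenario):
    """Return an empty graph keyed by the scenario's frame categories."""
    frames = SCENARIO_FRAMES.get(scenario, ["general framework"])
    return {frame: [] for frame in frames}

graph = build_knowledge_graph("meeting")
```

An unknown scenario falls back to a single general framework, matching the observation that different applications to be accessed may yield different graphs.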
[0076] Here, the classification of the multimodal access information to obtain the target access information can be achieved by the following step S303:
[0077] Step S303: Classify the multimodal information based on the knowledge graph to obtain the target access information.
[0078] In some embodiments, the multimodal information may be classified into positions corresponding to the knowledge graph according to different information contents, so as to obtain the target access information.
[0079] In some embodiments, information processing may be performed on the multimodal access information first, and then the obtained processing results are classified based on the knowledge graph to obtain the target access information. In this way, the internal frame system of the determined target access information can be made more complete, so that the target access information can be managed and retrieved subsequently. That is, the above step S303 can be implemented by the following steps S3031 and S3032 (not shown in the figure):
[0080] Step S3031, performing information processing on the multimodal access information to obtain a processing result.
[0081] In some embodiments, the acquired multimodal access information may be processed differently based on the different modal information it contains, so as to obtain corresponding processing results.
[0082] In some possible implementations, when the access operation includes a set of access sub-operations, a target access sub-operation may be selected from the access sub-operation set, the corresponding intermediate information may then be filtered out of the acquired multimodal information, and information processing may be performed on the intermediate information to obtain the final processing result. In this way, the acquired multimodal information can be screened and processed so that more refined target access information is obtained subsequently. That is, in step S3031 above, the following process may be performed before information processing is performed on the multimodal access information and the processing result is obtained:
[0083] In the first step, a target access sub-operation is determined from the access sub-operation set based on a preset access rule.
[0084] In some embodiments, the preset access rule may be set in advance, or may be determined according to the attribute information or application scenario of the application to be accessed. When the target access sub-operation is determined from the access sub-operation set based on the preset access rule, operations in the set such as saving, forwarding and adding to favorites may be determined as target access sub-operations.
[0085] In the second step, in the multimodal information, intermediate information associated with the target access sub-operation is determined.
[0086] In some embodiments, from the acquired multimodal information, the information associated with the target access sub-operation is determined as the intermediate information; for example, a saved picture is determined as intermediate information, and a commented-on video is determined as intermediate information. The intermediate information may itself be multimodal, that is, it may include but is not limited to: image information, video information, text information, voice information, behavior information, event information, and the like.
[0087] In some embodiments, when the number of applications to be accessed is two or more and the attribute information of the applications to be accessed differs, the corresponding intermediate information may be obtained from the different applications to be accessed; the target access sub-operations corresponding to different applications to be accessed may be the same or different.
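The two filtering steps above can be sketched as follows: pick the target sub-operations with a preset rule, then keep only the records tied to them. All names and the example records are assumptions for illustration.

```python
# Sketch of selecting target access sub-operations and the intermediate
# information associated with them.
TARGET_OPS = {"save", "forward", "favorite"}  # the preset access rule

def select_intermediate(records):
    """records: iterable of (sub_operation, info) pairs."""
    return [info for op, info in records if op in TARGET_OPS]

records = [
    ("click", "banner image"),
    ("save", "lecture screenshot"),
    ("comment", "short video"),
    ("favorite", "course page"),
]
intermediate = select_intermediate(records)
# -> ["lecture screenshot", "course page"]
```

With multiple applications to be accessed, each could carry its own `TARGET_OPS` set, matching the note that target sub-operations may differ per application.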
[0088] Here, performing information processing on the multimodal access information to obtain a processing result, that is, the above step S3031 can be implemented through the following process:
[0089] Perform information processing on the intermediate information to obtain the processing result.
[0090] In some embodiments, corresponding processing results are obtained by performing information processing on the filtered intermediate information; in this way, multi-modal information that better matches the access operation can be obtained.
[0091] In some possible implementation processes, the information processing includes, but is not limited to, processing information of various modalities in different ways. In this way, the information processing process can be made more accurate. Here, the above-mentioned information processing includes at least one of the following:
[0092] Text parsing, image recognition, video processing, audio recognition, behavior classification, event analysis.
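The per-modality processing above can be sketched as a dispatch table. The stub functions stand in for real text parsing, image recognition and so on, which the text does not specify; all names are assumptions.

```python
# Hypothetical dispatch table pairing each modality with its processing
# routine; stubs mark where real recognizers would plug in.
def parse_text(x):        return ("parsed_text", x)
def recognize_image(x):   return ("image_labels", x)
def classify_behavior(x): return ("behavior_class", x)

PROCESSORS = {
    "text": parse_text,
    "image": recognize_image,
    "behavior": classify_behavior,
}

def process(modality, payload):
    # unknown modalities pass through unprocessed
    handler = PROCESSORS.get(modality, lambda x: ("raw", x))
    return handler(payload)

result = process("text", "meeting notes")
# -> ("parsed_text", "meeting notes")
```

Video, audio and event handlers would be added to `PROCESSORS` in the same way.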
[0093] Step S3032: Classify the processing result based on the knowledge graph to obtain the target access information.
[0094] In some embodiments, the obtained processing results may be classified and archived based on the determined knowledge graph, so as to obtain target access information.
[0095] In some possible implementations, after classifying the multimodal access information and obtaining the target access information, the target access information may also be displayed, so that the relevant users can obtain the relevant information generated during the access in time. The information processing method provided in the embodiment of the present application may further implement the following steps:
[0096] In response to receiving the display instruction, at least one of the target access information and the recommendation information associated with the target access information is displayed.
[0097] In some embodiments, the received display instruction may be issued by the operation object while operating the electronic device, may be generated when the target access information is generated, or may be generated as a corresponding display instruction based on relevant display commands in the target access information.
[0098] In some embodiments, the recommendation information may be recommended advertisement data associated with the target access information. When the target access information and the recommendation information are displayed together, they may be displayed according to preset display rules; the target access information or the recommendation information may also be displayed separately according to the preset display rules.
[0099] Here, in the information processing method provided by the embodiment of the present application, a virtual assistant matching the application to be accessed may also be generated, and the virtual assistant may be adjusted based on the access operation during the access process and the determined multimodal access information, in order to improve the user experience in the process of accessing the application. This can be achieved through the following process:
[0100] The first step is to generate a preset virtual assistant matching the application to be accessed based on the application scenario of the application to be accessed.
[0101] In some embodiments, a corresponding preset virtual assistant can be generated based on the application scenario of the application to be accessed; for example, for a learning scenario a virtual student image can be generated, and for a shopping scenario a corresponding preset virtual assistant such as a virtual shopping cart image can be generated.
[0102] In the second step, based on at least one of the access operation and the multimodal access information, the image parameters of the preset virtual assistant are adjusted to obtain and display the target virtual assistant.
[0103] In some embodiments, the image parameters of the preset virtual assistant, such as its expression, action, display color, display brightness and display time, may be adjusted based on the access operation and/or the multimodal access information of this access process, thereby obtaining a target virtual assistant that dynamically matches the access operation and/or the multimodal access information. Likewise, the target virtual assistant can be displayed synchronously during the access, that is, whenever there is an access operation, the target virtual assistant is displayed dynamically.
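The two virtual-assistant steps can be sketched as follows: create a preset assistant for the scenario, then adjust its image parameters from the latest access operation. The parameter names and the favorite-to-smile mapping are illustrative assumptions.

```python
# Sketch of generating and adjusting the virtual assistant.
from dataclasses import dataclass

@dataclass
class VirtualAssistant:
    avatar: str
    expression: str = "neutral"
    brightness: float = 1.0

# preset images per scenario, following the examples in the text
PRESETS = {"learning": "virtual student", "shopping": "virtual shopping cart"}

def create_assistant(scenario):
    return VirtualAssistant(avatar=PRESETS.get(scenario, "generic avatar"))

def adjust(assistant, access_op):
    # assumed rule: brighten and smile when the user favorites content
    if access_op == "favorite":
        assistant.expression = "smiling"
        assistant.brightness = 1.2
    return assistant

assistant = adjust(create_assistant("learning"), "favorite")
```

Real adjustments would also cover actions, display color and display time, driven by the multimodal access information as well as the operation.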
[0104] The above information processing method will be described below with reference to a specific embodiment. However, it should be noted that this specific embodiment is only for better illustrating the embodiments of the present application, and does not constitute an improper limitation to the embodiments of the present application.
[0105] In the related art, business backgrounds such as online data-retrieval scenarios, retrieval of information across different applications in an electronic device, and online or offline interactive learning raise the question of how to help teachers and audiences manage their respective information and various statuses in real time, and how to manage and coordinate the content, behaviors, events and schedules generated by interaction on mobile devices, such as mobile phones serving as smart terminals. Usually only single-modal content is understood and analyzed, mainly using natural language processing, text processing, user intent recognition and the like to implement user intent and text interaction. This makes the resulting user intent and text interaction less accurate.
[0106] Based on this, the embodiments of the present application provide an information processing method that can dynamically acquire, in real time, the multimodal information associated with a related application while the application is being accessed, and can fuse and analyze the multimodal information to obtain richer and more accurate access information and interaction information for the access process, so that the access intent can be identified more accurately. This scheme is mainly achieved through the following steps:
[0107] The first step is to build an application-driven content or action knowledge graph according to the application scenario corresponding to the accessed application.
[0108] In the second step, in the process of accessing the application, the multimodal information generated by the two interacting parties during the access is collected in real time, including but not limited to related information generated by the accessed application, such as text information, voice information, image information and video information, as well as the actions or events corresponding to the access operations.
[0109] The third step is to perform information processing on the multimodal information, mainly including information recognition and information analysis, such as text parsing, image recognition or video processing, behavior classification and event analysis, and then to obtain the corresponding processing results.
[0110] The fourth step is to classify and archive, according to the knowledge graph determined in the first step, the processing results obtained after the multimodal information processing, and to successively establish the final knowledge graph generated during the access, that is, the target access information. Based on the final knowledge graph, the corresponding access information can subsequently be retrieved and managed quickly.
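The four steps above can be sketched end-to-end under assumed record and graph formats: build a scenario graph, collect records, process them per modality, and file the results into the graph as the target access information.

```python
# End-to-end sketch of the four-step scheme; formats are assumptions.
def run_pipeline(scenario, records):
    graph = {"content": [], "actions": []}          # step 1: scenario-driven graph
    for modality, payload in records:               # step 2: collected multimodal info
        processed = f"{modality}:{payload}"         # step 3: per-modality processing (stub)
        bucket = "actions" if modality in ("behavior", "event") else "content"
        graph[bucket].append(processed)             # step 4: classify and archive
    return graph

target = run_pipeline("learning", [("text", "notes"), ("event", "save")])
# -> {"content": ["text:notes"], "actions": ["event:save"]}
```

A real implementation would replace the stub processing with the recognizers of step three and use the scenario-specific frames of step one instead of the fixed content/actions split.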
[0111] As shown in Figure 4, which is a schematic flowchart of implementing one kind of access-information determination by applying the information processing method provided by the embodiment of the present application: here, the information processing method provided by the embodiment of the present application may by default be realized by an agent service (assistant service) inside the electronic device. First, the assistant service is started (401), and the following operations are implemented based on the assistant service; 402 sets the related parameters of the assistant service, for example, after starting the assistant service, the service scope, service duration, service application permissions and the like of the assistant service are set. Secondly, after the service is started, the assistant service is used to receive multimodal information in real time (403); for example, data generated by multiple different applications inside the electronic device during the access process may be obtained, including but not limited to instant messaging information, web browsing information, document browsing information and voice/SMS information. After the multimodal information is acquired, the assistant service continues to be used to process it (404), including but not limited to: natural language processing, web page information capture, personal document information analysis, speech recognition/understanding, image information recognition/understanding and behavior interaction recognition/understanding. Finally, the assistant service can be used to extract, based on preset rules or preset key parameters, the data after multimodal information processing (405), to extract information abstracts, and to generate the final access information from the information abstracts (406). Here, the information extraction of 405 makes the data obtained by this service more accurate, so that it can be accessed, managed and retrieved efficiently in the future.
[0112] Likewise, as shown in Figure 5, which is a schematic flowchart of another way of determining access information by applying the information processing method provided by the embodiment of the present application, the method can be used in an online teaching process. For example, while an online teaching application is open, the assistant service is used to obtain, process, classify and display the multimodal information of the teaching process, so as to obtain teaching information that is richer in content and more systematically organized.
[0113] First, the online teaching process is started (501), and the current teaching content is obtained at the same time (502), including but not limited to: the teacher's subject identity portrait, the students' identity portraits and the course content of the teaching process. Secondly, based on the obtained current teaching content, a corresponding knowledge graph is established, that is, a knowledge framework is built (503): different types of knowledge systems, knowledge structures for different users or different dimensions, general information framework structures, and the like.
[0114] Thirdly, during teaching, the information of both parties to the interaction is acquired in real time (504), including but not limited to: interactive question-and-answer information, student/teacher status information and the teacher's course content information. The acquired information of both parties, that is, the multimodal information, is processed synchronously (505), and the acquired multimodal information is further classified based on the established knowledge framework to obtain a corresponding set of teaching-aid information (506). Here, the information can be divided into two categories according to user type (teacher and student): for students (5061), one can obtain personalized study notes with points of emphasis, automatic summaries of class highlights and personal error records; for teachers (5062), one can obtain listening records of different students, a summary of classroom interaction records, a summary of teaching oversights, a summary of after-class tutoring content, and so on.
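Step 506's split into student (5061) and teacher (5062) information sets can be sketched as a simple routing function. The `audience` field and the item kinds are invented stand-ins for the routing that a real knowledge framework (503) would perform:

```python
# Hypothetical sketch of step 506: routing processed classroom items
# into per-user information sets (5061 students, 5062 teachers).

def categorize(items):
    sets = {"student": [], "teacher": []}  # 5061 / 5062
    for item in items:
        # a real system would classify via the knowledge framework
        # from 503; a simple "audience" field stands in for that here
        audience = item.get("audience", "student")
        sets.setdefault(audience, []).append(item)
    return sets

items = [
    {"audience": "student", "kind": "personal_error_record"},
    {"audience": "teacher", "kind": "interaction_summary"},
]
sets = categorize(items)
```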
[0115] Finally, information feedback can be performed based on the role dimension, that is, 507 is executed: feedback of different dimensions can be given to teachers and students respectively; intelligent multi-dimensional data analysis (508) can also be performed, and a teaching or learning assistant report (509) can be generated.
[0116] Based on this, the information processing method provided by the embodiments of the present application can, by adaptively analyzing the page content of the application during access to or service of the application, record browsed pages together with the related behavior and data; it can perceive multimodal information across pages and applications during access; and it can process the multimodal information synchronously, including but not limited to: document identification of document content, text understanding and parsing of web page content, identification and classification of access behavior during access, and collection and organization of information during access. In this way, more accurate and informative access data can be obtained.
[0117] In addition, a virtual assistant corresponding to the browsed pages or the accessing user can be generated synchronously, and the virtual assistant can dynamically adjust its own parameters according to the access behavior and the information generated during access. The multimodal information obtained in the accessed application, and the advertisement information associated with that multimodal information, can also be displayed dynamically according to preset rules and/or parameters.
[0118] The embodiments of the present application provide an information processing apparatus. Figure 6 is a schematic structural diagram of the information processing apparatus provided in the embodiment of the present application. As shown in Figure 6, the information processing apparatus 600 includes: an acquisition module 601, a determination module 602 and a classification module 603; wherein:
[0119] the acquisition module 601 is configured to obtain the application to be accessed;
[0120] the determination module 602 is configured to acquire multimodal access information associated with the application to be accessed in response to an access operation for the application to be accessed; and
[0121] the classification module 603 is configured to classify the multimodal access information to obtain target access information.
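The three modules of Figure 6 can be sketched as follows. The class names, the registry structure and the per-item `modality` field are illustrative assumptions; only the 601/602/603 responsibilities come from the disclosure:

```python
# Minimal sketch of the apparatus in Figure 6: acquisition (601),
# determination (602) and classification (603) modules.

class AcquisitionModule:            # 601: obtain the application to be accessed
    def get_app(self, app_id, registry):
        return registry[app_id]

class DeterminationModule:          # 602: collect multimodal access information
    def get_multimodal_info(self, app):
        # respond to an access operation by gathering the app's page data
        return app.get("pages", [])

class ClassificationModule:         # 603: classify into target access information
    def classify(self, info):
        grouped = {}
        for item in info:
            grouped.setdefault(item["modality"], []).append(item)
        return grouped  # grouped by modality as a stand-in for the graph

registry = {"browser": {"pages": [{"modality": "text", "body": "hello"},
                                  {"modality": "image", "body": "img-bytes"}]}}
app = AcquisitionModule().get_app("browser", registry)
info = DeterminationModule().get_multimodal_info(app)
target = ClassificationModule().classify(info)
```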
[0122] In some embodiments, the determination module 602 is further configured to, in response to an access operation for the application to be accessed, acquire a page to be processed generated by the application to be accessed during access, and to perform information extraction on the page information of the page to be processed to obtain the multimodal access information.
[0123] In some embodiments, the multimodal access information includes at least one of the following: text information, image information, video information, audio information, access behavior information, and access events.
[0124] In some embodiments, the information processing apparatus 600 further includes: a building module configured to determine an application scenario of the application to be accessed and build a knowledge graph matching the application scenario; the classification module 603 is further configured to classify the multimodal information based on the knowledge graph to obtain the target access information.
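The building module's scenario-to-graph step can be sketched as choosing a graph template per scenario and then classifying items under its nodes. The scenario names, node labels and the `node` field are all invented for illustration:

```python
# Hedged sketch of the building module: pick a knowledge-graph template
# matching the application scenario, then classify items under it.

TEMPLATES = {
    "online_teaching": ["course_content", "qa_interaction", "user_status"],
    "web_browsing":    ["page_text", "media", "behavior"],
}

def build_graph(scenario):
    # one empty bucket per node of the scenario's knowledge graph
    return {node: [] for node in TEMPLATES.get(scenario, ["misc"])}

def classify(graph, items):
    for item in items:
        # route to the item's declared node, else the first node
        node = item.get("node") if item.get("node") in graph else next(iter(graph))
        graph[node].append(item)
    return graph

graph = build_graph("online_teaching")
graph = classify(graph, [{"node": "qa_interaction", "text": "Q1"}])
```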
[0125] In some embodiments, the classification module 603 is further configured to perform information processing on the multimodal access information to obtain a processing result, and to categorize the processing result based on the knowledge graph to obtain the target access information.
[0126] In some embodiments, when the access operation includes a set of access sub-operations, the determination module 602 is further configured to determine a target access sub-operation from the set of access sub-operations based on a preset access rule, and to determine, in the multimodal information, the intermediate information associated with the target access sub-operation; the classification module 603 is further configured to perform information processing on the intermediate information to obtain the processing result.
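The sub-operation filtering just described can be sketched as two small functions. The specific rule (a minimum dwell time) and all field names are assumed examples; the patent only states that a preset access rule selects the target sub-operations:

```python
# Sketch of selecting target access sub-operations via a preset rule
# and keeping only the intermediate information they produced.

def select_targets(sub_ops, min_dwell_s=5):
    # assumed preset rule: ignore sub-operations shorter than min_dwell_s
    return [op for op in sub_ops if op.get("dwell_s", 0) >= min_dwell_s]

def intermediate_info(multimodal, targets):
    # keep only multimodal items tied to a target sub-operation
    ids = {op["id"] for op in targets}
    return [m for m in multimodal if m.get("op_id") in ids]

sub_ops = [{"id": 1, "dwell_s": 12}, {"id": 2, "dwell_s": 1}]
multimodal = [{"op_id": 1, "text": "kept"}, {"op_id": 2, "text": "dropped"}]
info = intermediate_info(multimodal, select_targets(sub_ops))
```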
[0127] In some embodiments, the information processing includes at least one of the following: text parsing, image recognition, video processing, audio recognition, behavior classification, and event analysis.
[0128] In some embodiments, the information processing apparatus 600 further includes: a presentation module, configured to, in response to receiving a presentation instruction, present at least one of the target access information and recommendation information associated with the target access information.
[0129] In some embodiments, the information processing apparatus 600 further includes: an adjustment module configured to generate a preset virtual assistant matching the application to be accessed based on the application scenario of the application to be accessed, and to adjust the image parameters of the preset virtual assistant based on at least one of the access operation and the multimodal access information, so as to obtain and display the target virtual assistant.
[0130] It should be noted that the description of the above device-side embodiment is similar to the description of the above method embodiment, and has similar beneficial effects to the method embodiment. For technical details not disclosed in the device-side embodiments of the present application, please refer to the description of the method embodiments of the present application for understanding.
[0131] It should be noted that, in the embodiments of the present application, if the above method is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence, or the parts thereof that contribute to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a terminal, a server, etc.) to execute all or part of the methods described in the various embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a magnetic disk, an optical disk or other media that can store program code. As such, the embodiments of the present application are not limited to any specific combination of hardware and software.
[0132] Correspondingly, the embodiments of the present application further provide a computer program product, the computer program product includes computer-executable instructions, and after the computer-executable instructions are executed, the information processing methods provided by the embodiments of the present application can be implemented.
[0133] Correspondingly, an embodiment of the present application provides a computer device. Figure 7 is a schematic structural diagram of the computer device provided in the embodiments of the present application. As shown in Figure 7, the computer device 700 includes: a processor 701, at least one communication bus 704, a communication interface 702, at least one external communication interface and a memory 703. The communication interface 702 is configured to realize connection and communication between these components. The communication interface 702 may include a display screen, and the external communication interface may include a standard wired interface and a wireless interface. The processor 701 is configured to execute a program in the memory so as to implement the information processing method provided by the above embodiments.
[0134] Correspondingly, the embodiments of the present application further provide a computer storage medium, where computer-executable instructions are stored thereon, and when the computer-executable instructions are executed by a processor, the information processing methods provided in the foregoing embodiments are implemented.
[0135] The descriptions of the above information processing apparatus and storage medium embodiments are similar to the descriptions of the above method embodiments, and have similar technical descriptions and beneficial effects as the corresponding method embodiments. It is not repeated here. For technical details that are not disclosed in the embodiments of the information processing apparatus and storage medium of the present application, please refer to the description of the method embodiments of the present application for understanding.
[0136] It should be understood that reference throughout the specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic associated with the embodiment is included in at least one embodiment of the present application. Thus, appearances of "in one embodiment" or "in an embodiment" in various places throughout this specification do not necessarily refer to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments. It should also be understood that, in the various embodiments of the present application, the sequence numbers of the above processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application. The above serial numbers of the embodiments of the present application are for description only and do not represent the relative merits of the embodiments. It should be noted that, herein, the terms "comprising", "including" or any other variation thereof are intended to cover non-exclusive inclusion, such that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element qualified by the phrase "comprising a..." does not preclude the presence of additional identical elements in the process, method, article or device that includes the element.
[0137] In the several embodiments provided by the present application, it should be understood that the disclosed devices and methods may be implemented in other manners. The device embodiments described above are only illustrative. For example, the division of units is only a logical functional division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical or in other forms.
[0138] A unit described above as a separate component may or may not be physically separate, and a component displayed as a unit may or may not be a physical unit; it may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
[0139] In addition, the functional units in the embodiments of the present application may all be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units. Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments can be completed by hardware related to program instructions; the aforementioned program can be stored in a computer-readable storage medium, and when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes: a removable storage device, a ROM, a magnetic disk, an optical disk or other media that can store program code.
[0140] Alternatively, if the above integrated units of the embodiments of the present application are implemented in the form of software function modules and sold or used as independent products, they may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence, or the parts thereof that contribute to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the methods described in the various embodiments of this application. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a magnetic disk or an optical disk. The above are only specific implementations of the embodiments of the present application, but the protection scope of the embodiments of the present application is not limited thereto. Any changes or substitutions should be included within the protection scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application should be subject to the protection scope of the claims.