303 results about "Expressive" patented technology

Method and device for restoring signal under speaker cut-off frequency to original sound

The invention relates to an audio signal processing method and device for restoring signal content below a speaker's cut-off frequency to the original sound, aiming to extend the equivalent range of a small speaker and enhance bass reproduction. A virtual bass harmonic sequence is generated by a feed-forward self-multiplication operation, with an independent gain applied to each harmonic, so that the proportion of the harmonics, and therefore the timbre, can be freely controlled. A virtual bass timbre-control method adjusts the gain applied to the virtual bass signal over its attack and release phases, shaping the character of the final signal as required and making percussive ("knocking") sounds fuller. The virtual bass processing is further strengthened by combining techniques such as bass dynamic-range compression, power-conserving filtering, and bass frequency re-division, so that even at a high speaker cut-off frequency the virtual bass signal can be restored effectively and the quality of the original bass preserved.
Owner:SHENZHEN FOCALTECH SYST
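
The feed-forward self-multiplication idea above can be sketched in a few lines (all names here are assumptions, not from the patent): squaring a band-limited tone doubles its frequency content, so repeated squaring climbs a ladder of harmonics whose mix is set by one gain per rung.

```python
import math

def virtual_bass_harmonics(samples, gains):
    """Generate virtual-bass harmonics by feed-forward self-multiplication.

    Squaring a sinusoid doubles its frequency (cos^2 x = (1 + cos 2x) / 2),
    so each pass through the loop reaches the next power-of-two harmonic
    (1st, 2nd, 4th, ...); `gains` sets each harmonic's weight in the mix.
    """
    out = [0.0] * len(samples)
    harmonic = list(samples)
    for g in gains:
        # remove the DC offset introduced by squaring, then renormalize
        mean = sum(harmonic) / len(harmonic)
        centered = [h - mean for h in harmonic]
        peak = max(abs(c) for c in centered) or 1.0
        normalized = [c / peak for c in centered]
        out = [o + g * n for o, n in zip(out, normalized)]
        # feed forward: square to reach the next harmonic
        harmonic = [n * n for n in normalized]
    return out

# 100 Hz fundamental sampled at 8 kHz, mixed from the 2nd and 4th harmonics
sr = 8000
tone = [math.sin(2 * math.pi * 100 * n / sr) for n in range(sr // 10)]
bass = virtual_bass_harmonics(tone, gains=[0.0, 0.5, 0.25])
```

Because each harmonic is generated and weighted separately, the harmonic mix (and so the perceived timbre) is controlled by `gains` alone, which mirrors the patent's claim of freely controllable harmonic proportions.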

Text-independent speech conversion system based on HMM model state mapping

The invention discloses a text-independent speech conversion system based on HMM model state mapping, composed of a data alignment module, a spectrum conversion model generation module, a prosody conversion model generation module, an online conversion module, and a parametric speech synthesizer. The data alignment module receives speech parameters from the source and target speakers and aligns the input data by phoneme information to generate state-aligned data pairs. The spectrum conversion model generation module receives the aligned data pairs and builds a spectral parameter conversion model between the source and target speakers; the prosody conversion model generation module receives the same aligned pairs and builds the corresponding prosodic parameter conversion model. The online conversion module applies the two generated conversion models to the source speaker's speech data to obtain the converted spectral and prosodic parameters. The parametric speech synthesizer receives the converted spectral and prosodic information from the online conversion module and outputs the converted speech.
Owner:北京中科欧科科技有限公司
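
The module flow above can be sketched as follows; the per-state mean-offset mapping is a deliberately crude stand-in for the patent's HMM-state-mapped conversion models, and all names are assumptions:

```python
from collections import defaultdict

def train_state_mapping(aligned_pairs):
    """Learn a per-state offset from source to target parameters.

    `aligned_pairs` is a list of (state_id, source_value, target_value)
    tuples produced by the alignment step; the HMM state id plays the
    role of the phoneme-level alignment key.
    """
    sums = defaultdict(lambda: [0.0, 0])
    for state, src, tgt in aligned_pairs:
        acc = sums[state]
        acc[0] += tgt - src
        acc[1] += 1
    # mean source-to-target offset per state
    return {s: total / n for s, (total, n) in sums.items()}

def convert(frames, mapping):
    """Online conversion: apply the learned per-state offset to
    (state_id, value) frames from the source speaker."""
    return [(state, value + mapping.get(state, 0.0)) for state, value in frames]

pairs = [("a", 1.0, 1.5), ("a", 2.0, 2.5), ("b", 0.0, -1.0)]
mapping = train_state_mapping(pairs)
converted = convert([("a", 3.0), ("b", 2.0)], mapping)
```

In the real system the same pattern is instantiated twice, once for spectral parameters and once for prosodic parameters, and the converted streams are handed to the parametric synthesizer.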

Intelligent clothing matching system and method aiming at sale terminals

Inactive · CN101833731A · Human touch control · Easy to operate · Commerce · Hand parts · Control system
The invention relates to an intelligent clothing matching system and method for sales terminals. The system comprises a multipoint touch control system (1), an image acquisition device (2), and an intelligent clothing matching system (3). The user operates the control interface of the multipoint touch control system with hand gestures; the system detects multiple touch points generated on the same screen at the same time, transmits the information generated by user operations to the intelligent clothing matching system, and provides gesture recognition, image recognition, finger-trajectory judgment, and multipoint touch control. The image acquisition device (2) captures user photographs and transmits them to the intelligent clothing matching system. The intelligent clothing matching system (3) stores and analyzes the user photographs sent by the image acquisition device, retrieves suitable products from a product database, and displays them on the control interface; the user can then virtually fit the products on a virtual model by finger operation. The invention simplifies and accelerates the user's product selection, sparing the user lengthy browsing and tedious, repeated physical fitting.
Owner:翁荣森 +2

Contourlet domain multi-modal medical image fusion method based on statistical modeling

The invention discloses a Contourlet-domain multi-modal medical image fusion method based on statistical modeling, mainly to solve the difficulty of balancing spatial resolution against spectral information in medical image fusion. The steps are: 1) perform IHS transformation on the images to be fused to obtain intensity, hue, and saturation; 2) apply the Contourlet transform to the intensity component and estimate the parameters of a contextual hidden Markov model (CHMM) for the high-frequency sub-bands with the EM algorithm; 3) fuse the low-frequency sub-band by taking, at each position, the coefficient with the larger regional sum of absolute values, and fuse the high-frequency sub-bands with a rule based on the CHMM and an improved pulse-coupled neural network (M-PCNN); 4) apply the inverse Contourlet transform to the fused high- and low-frequency coefficients to reconstruct a new intensity component; and 5) obtain the fused image by inverse IHS transformation. The method fully integrates the structural and functional information of the medical images, effectively preserves image detail, improves visual quality, and greatly improves fusion quality compared with conventional fusion methods.
Owner:JIANGNAN UNIV
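
The low-frequency rule in step 3), keeping at each position the coefficient with the larger regional absolute-value sum, can be sketched as follows (function and variable names are assumptions, and the CHMM/M-PCNN high-frequency rule is not modeled here):

```python
def fuse_lowpass(a, b, radius=1):
    """Low-frequency fusion rule: at each position keep the coefficient
    from whichever sub-band has the larger sum of absolute values over
    a (2*radius+1) x (2*radius+1) neighborhood (ties favor `a`)."""
    h, w = len(a), len(a[0])

    def region_abs_sum(img, i, j):
        total = 0.0
        for di in range(-radius, radius + 1):
            for dj in range(-radius, radius + 1):
                ii, jj = i + di, j + dj
                if 0 <= ii < h and 0 <= jj < w:
                    total += abs(img[ii][jj])
        return total

    return [
        [a[i][j] if region_abs_sum(a, i, j) >= region_abs_sum(b, i, j) else b[i][j]
         for j in range(w)]
        for i in range(h)
    ]

# toy 3x3 sub-bands: A has energy top-left, B bottom-right
A = [[9.0, 9.0, 0.0], [9.0, 9.0, 0.0], [0.0, 0.0, 0.0]]
B = [[0.0, 0.0, 0.0], [0.0, 0.0, 12.0], [0.0, 12.0, 12.0]]
fused = fuse_lowpass(A, B)
```

Using a regional sum rather than a per-pixel comparison makes the selection robust to isolated noisy coefficients, which is the usual motivation for area-based fusion rules.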

Lithium battery state-of-charge prediction method based on improved generative adversarial network

Active · CN111007399A · Nash equilibrium · Boost discriminative samples · Electrical testing · Neural architectures · Algorithm · Generative adversarial network
The invention provides a lithium battery state-of-charge (SOC) prediction method based on an improved generative adversarial network, comprising the following steps: collect the modal parameters and the true SOC of each lithium battery sample; use a regression model R to estimate a lower bound on the mutual information between the generator output G(z, c) and the condition variable c; train the generator G and the discriminator D against each other until they reach a Nash equilibrium; generate samples with G and add them to the training set used to train R; and alternately train G, D, and R until each model converges. The generator thus expands the training set while conforming to the original data distribution. In the improved network, two activation functions, the randomized rectified linear unit (RReLU) and the exponential linear unit (ELU), are used to obtain stronger model expressiveness and better learn the nonlinear characteristics of the lithium battery.
Owner:ZHEJIANG UNIV
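
The two activation functions named in the abstract have standard closed forms. A minimal pure-Python sketch follows; the slope bounds [1/8, 1/3] are the common RReLU default, an assumption not stated in the abstract:

```python
import math
import random

def rrelu(x, lower=1 / 8, upper=1 / 3, training=False, rng=random):
    """Randomized rectified linear unit: negative inputs are scaled by a
    slope drawn uniformly from [lower, upper] during training, and by the
    midpoint (lower + upper) / 2 at inference time."""
    if x >= 0:
        return x
    slope = rng.uniform(lower, upper) if training else (lower + upper) / 2
    return slope * x

def elu(x, alpha=1.0):
    """Exponential linear unit: identity for x >= 0, smooth exponential
    saturation alpha * (exp(x) - 1) for x < 0."""
    return x if x >= 0 else alpha * (math.exp(x) - 1.0)
```

RReLU's randomized negative slope acts as a regularizer during adversarial training, while ELU's smooth negative branch keeps mean activations near zero; both keep gradients alive for negative inputs, which plausibly underlies the claimed gain in expressiveness.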

Book VR and AR experience interactive system

The invention discloses a book VR and AR experience interactive system comprising a content database, an image recognition module, a dynamic storage module, a big data analysis module, and a dynamic display module. The content database stores all books, book pictures, a content input model, book data, and VR and AR resource data. The image recognition module recognizes the basic information and feature attributes of the book pictures and performs image recognition on them. The dynamic storage module processes the book pictures according to their feature attributes and stores the book data. The big data analysis module analyzes the common categories of identical feature attributes to form a general content input model. The dynamic display module extracts matching book data and the content data of the content input model for display. The system presents books in diverse modes such as VR and AR, improving the user experience; by digitizing paper media it enhances interactivity and interest during reading and achieves generalization and standardization of the book data.
Owner:JINHUA VRSEEN TECH