40 results about "How to reduce the number of dispatches" patented technology

Method, device and base station for adjusting the maximum number of HARQ (hybrid automatic repeat request) retransmissions

The invention provides a method, a device and a base station for adjusting the maximum number of HARQ (hybrid automatic repeat request) retransmissions, so as to solve the problems of packet loss, redundant scheduling and low resource utilization that arise when different users transmit with the same maximum number of HARQ retransmissions. The method comprises: under the current maximum number of HARQ retransmissions, acquiring HARQ transmission statistics for a user's wireless transmission channel, the statistics comprising the number of data packets whose transmission on the channel failed and the number of HARQ retransmissions used for each successfully transmitted data packet; and updating the maximum number of HARQ retransmissions according to these statistics. By adjusting the maximum number of HARQ retransmissions from the number of failed data packets and the retransmission count of each successful packet, the resource utilization rate is improved while the data packet loss rate is reduced.
Owner:ZTE CORP
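
A minimal sketch of the adjustment rule described in this abstract, assuming hypothetical thresholds (loss_threshold, headroom) and a simple raise-on-loss / lower-when-unused update; the function names and the exact rule are illustrative, not the patented algorithm:

```python
# Illustrative sketch only: thresholds and the update rule are assumptions,
# not the patented algorithm.
from dataclasses import dataclass
from typing import List


@dataclass
class HarqStats:
    failed_packets: int          # packets that failed even at the current max retransmission count
    retx_per_success: List[int]  # HARQ retransmissions used by each successfully transmitted packet


def update_max_harq_retx(current_max: int, stats: HarqStats,
                         loss_threshold: int = 0,
                         headroom: int = 2,
                         floor: int = 1, ceiling: int = 8) -> int:
    """Adjust the maximum HARQ retransmission count from per-channel statistics."""
    if stats.failed_packets > loss_threshold:
        # Packets are still being lost at the current limit: allow more retransmissions.
        return min(current_max + 1, ceiling)
    if stats.retx_per_success:
        # If successful packets never come close to the limit, shrink it to save
        # scheduling occasions and radio resources.
        worst_case = max(stats.retx_per_success)
        if current_max - worst_case >= headroom:
            return max(current_max - 1, floor)
    return current_max


if __name__ == "__main__":
    stats = HarqStats(failed_packets=0, retx_per_success=[0, 1, 1, 0, 2])
    print(update_max_harq_retx(current_max=6, stats=stats))  # -> 5
```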

Uplink GBR (Guaranteed Bit Rate) service scheduling method and base station

The invention provides an uplink GBR (Guaranteed Bit Rate) service scheduling method and a base station. In the method, the base station presets, for a terminal's GBR service, the maximum data size per unit period corresponding to the MBR (Maximum Bit Rate) and the minimum data size per unit period corresponding to the GBR; counts the GBR data size sent in the terminal's uplink data transmission subframes during the current period; determines, from the maximum data size, the minimum data size and the counted GBR data size, whether air interface resource scheduling should be performed; and, if so, grants the corresponding air interface resources according to the GBR data size currently awaiting transmission, as obtained from the BSR (Buffer Status Report) reported by the terminal. Because the base station decides whether to schedule the terminal according to the GBR data actually transmitted in the current period and the maximum and minimum data sizes, and grants only the air interface resources matching the data awaiting transmission when scheduling is needed, the scheduling frequency is lowered and the occupation of CCE (Control Channel Element) resources is reduced.
Owner:CHENGDU TD TECH LTD
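
A rough sketch of the per-period decision described above; the byte-level bookkeeping, the function names and the grant rule are assumptions for illustration, not the patented procedure:

```python
# Illustrative sketch: names and the decision rule are assumptions drawn from
# the abstract, not the exact patented procedure.
def should_schedule(sent_gbr_bytes: int, min_bytes: int, max_bytes: int) -> bool:
    """Schedule while the GBR traffic sent this period is below the MBR ceiling,
    and always if it has not yet reached the GBR floor."""
    if sent_gbr_bytes < min_bytes:
        return True               # guaranteed rate not yet met in this period
    return sent_gbr_bytes < max_bytes


def grant_air_interface(sent_gbr_bytes: int, min_bytes: int, max_bytes: int,
                        bsr_pending_bytes: int) -> int:
    """Return the number of bytes to grant for this scheduling occasion (0 = no grant)."""
    if not should_schedule(sent_gbr_bytes, min_bytes, max_bytes):
        return 0
    # Grant what the BSR reports as pending, but never beyond the MBR budget
    # remaining in this period.
    remaining_budget = max_bytes - sent_gbr_bytes
    return min(bsr_pending_bytes, remaining_budget)


if __name__ == "__main__":
    print(grant_air_interface(sent_gbr_bytes=3000, min_bytes=2000,
                              max_bytes=10000, bsr_pending_bytes=4000))  # -> 4000
```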

CFQ (Completely Fair Queuing) scheduling method

The invention provides a CFQ (Completely Fair Queuing) scheduling method, and relates to the field of Linux kernel I/O scheduling. The method schedules first among groups, then within a group, and finally within a queue. When an application needs to perform intensive I/O, it requests the kernel to set the relevant flags, switching between inter-group and intra-group scheduling, and changes the boundary used by the in-queue elevator algorithm to the frequently accessed region. When a group's Intense flag is set to 1, the device corresponding to that group has the highest priority and inter-group scheduling is reduced. When a group's Continue flag is set to 1, the group's queues are separated according to the region requested by the I/O, so that an I/O-intensive request can as far as possible be completed within one queue, and the operating boundary of the in-queue elevator scheduling algorithm is changed to the head and tail, so that intensive I/O requests are serviced in a concentrated manner and the number of elevator scheduling passes is reduced.
Owner:LANGCHAO ELECTRONIC INFORMATION IND CO LTD
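
The flag-driven behaviour can be pictured with a small user-space model; this is Python rather than kernel code, and the grouping and ordering rules below are illustrative assumptions using the Intense and Continue flag names from the abstract:

```python
# User-space toy model of the Intense / Continue flags; not kernel code, and the
# ordering rules are illustrative assumptions.
from collections import defaultdict
from typing import Dict, List, Tuple

Request = Tuple[int, int]   # (group_id, sector)


def dispatch(requests: List[Request],
             intense: Dict[int, int],
             continue_flag: Dict[int, int]) -> List[Request]:
    # Groups with Intense == 1 are served first, which reduces switching
    # (scheduling) between groups.
    groups: Dict[int, List[Request]] = defaultdict(list)
    for req in requests:
        groups[req[0]].append(req)
    ordered_groups = sorted(groups, key=lambda g: -intense.get(g, 0))

    out: List[Request] = []
    for g in ordered_groups:
        reqs = groups[g]
        if continue_flag.get(g, 0) == 1:
            # Keep requests for the same region together so an I/O-intensive
            # burst completes in one queue, cutting elevator passes.
            reqs = sorted(reqs, key=lambda r: r[1])
        out.extend(reqs)
    return out


if __name__ == "__main__":
    reqs = [(1, 900), (2, 10), (1, 100), (2, 11), (1, 120)]
    print(dispatch(reqs, intense={2: 1}, continue_flag={1: 1}))
```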

Flow prediction multi-task model generation method, scheduling method, device and equipment

Pending CN114202130A · Advantages: Reduce waste; Reduce the number of times the quantity cannot meet the demand · Fields: Forecasting; Resources; Data mining; Operations research
The embodiment of the invention discloses a flow prediction multi-task model generation method, a scheduling method, a device and equipment. A specific embodiment of the method comprises: obtaining a historical order information set and a historical value reduction information set of a target article in a target historical time period; performing feature processing on the historical order information set and the historical value reduction information set, based on each order date included in the historical order information set and on the historical value reduction information set, to obtain a processed historical order information set serving as a sample historical order information set; and generating the flow prediction multi-task model according to a preset loss function and the value reduction flow features, non-value-reduction flow features and value reduction features included in the sample historical order information set. The embodiment improves the accuracy of the flow prediction result and reduces the waste of transportation resources.
Owner:BEIJING JINGDONG ZHENSHI INFORMATION TECH CO LTD

Method and device for determining power authorization

The invention discloses a method and a device for determining power authorization. The method comprises: the base station equipment acquires a first data volume currently awaiting transmission by the UE (user equipment) and a second data volume that the UE can transmit within one TTI (transmission time interval); the base station determines the number of first MAC-d PDUs (protocol data units) from the first data volume and the number of second MAC-d PDUs from the second data volume; when the number of second MAC-d PDUs is smaller than the number of first MAC-d PDUs, the base station determines a modified TBS (transport block size) value from the number of second MAC-d PDUs and determines the power authorization for the UE from the modified TBS value; when the number of second MAC-d PDUs is larger than or equal to the number of first MAC-d PDUs, the base station determines the modified TBS value from the number of first MAC-d PDUs and determines the power authorization accordingly. The method and the device improve scheduling efficiency and transmission efficiency, increase the resource utilization rate, and improve cell throughput.
Owner:DATANG MOBILE COMM EQUIP CO LTD
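
A minimal sketch of the grant decision above, taking the smaller of the two MAC-d PDU counts before deriving the transport block size; the PDU size, the TBS lookup and the power mapping are placeholders, not values from the patent:

```python
# Minimal sketch: PDU size, the TBS lookup, and the power mapping are
# placeholders, not values from the patent.
MAC_D_PDU_BITS = 336          # assumed MAC-d PDU size, purely illustrative


def num_pdus(data_bits: int) -> int:
    return -(-data_bits // MAC_D_PDU_BITS)    # ceiling division


def modified_tbs(pdu_count: int) -> int:
    # Placeholder for the real TBS table lookup.
    return pdu_count * MAC_D_PDU_BITS


def power_authorization(buffered_bits: int, per_tti_capacity_bits: int) -> int:
    """Pick the smaller of 'what the UE has buffered' and 'what fits in one TTI',
    derive a modified TBS from it, and map that TBS to a power grant."""
    first_pdus = num_pdus(buffered_bits)           # from the data awaiting transmission
    second_pdus = num_pdus(per_tti_capacity_bits)  # from what one TTI can carry
    tbs = modified_tbs(min(first_pdus, second_pdus))
    return tbs  # a real implementation would map TBS -> power authorization here


if __name__ == "__main__":
    print(power_authorization(buffered_bits=1500, per_tti_capacity_bits=5000))
```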

Multi-modal massive-data-flow scheduling method under multi-core DSP

The invention discloses a multi-modal massive-data-flow scheduling method for a multi-core DSP. The multi-core DSP comprises a main control core and acceleration cores, and requests are passed between them through a request packet queue. Three data-block selection methods, continuous selection, random selection and spiral selection, are defined on the basis of the data dimensions and the data priority order, and two multi-core data-block allocation methods, cyclic scheduling and load-balancing scheduling, are defined according to load balance. Data blocks selected and grouped by a data-block grouping method according to the allocation granularity are loaded into multiple computing cores for processing. The method uses multi-level data-block scheduling, satisfies the requirements on system load, data correlation, processing granularity, data dimensions and ordering when data blocks are scheduled, has good generality and portability, extends the modes and forms of data-block scheduling at multiple levels, and has a wide scope of application. The user only needs to configure the data-block scheduling mode and the allocation granularity; the system completes data scheduling automatically, improving the efficiency of parallel development.
Owner:XIAN MICROELECTRONICS TECH INST
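
A sketch of the two configurable choices the abstract describes (block selection order and multi-core allocation); the spiral order and the load metric here are illustrative assumptions:

```python
# Sketch of block selection and allocation; the spiral order and the load
# metric are illustrative assumptions, not the patented implementation.
import random
from typing import List


def select_blocks(block_ids: List[int], mode: str) -> List[int]:
    if mode == "continuous":
        return list(block_ids)
    if mode == "random":
        shuffled = list(block_ids)
        random.shuffle(shuffled)
        return shuffled
    if mode == "spiral":
        # Alternate from both ends toward the middle as a stand-in for a
        # dimension-aware spiral order.
        out, lo, hi = [], 0, len(block_ids) - 1
        while lo <= hi:
            out.append(block_ids[lo]); lo += 1
            if lo <= hi:
                out.append(block_ids[hi]); hi -= 1
        return out
    raise ValueError(mode)


def allocate(blocks: List[int], cores: int, mode: str,
             cost=lambda b: 1) -> List[List[int]]:
    plan = [[] for _ in range(cores)]
    if mode == "cyclic":                 # round-robin over the computing cores
        for i, b in enumerate(blocks):
            plan[i % cores].append(b)
    elif mode == "load_balance":         # give the next block to the lightest core
        loads = [0] * cores
        for b in blocks:
            k = loads.index(min(loads))
            plan[k].append(b)
            loads[k] += cost(b)
    else:
        raise ValueError(mode)
    return plan


if __name__ == "__main__":
    blocks = select_blocks(list(range(8)), "spiral")
    print(allocate(blocks, cores=3, mode="load_balance"))
```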

Cluster distributed resource scheduling method, device and equipment and storage medium

The invention discloses a cluster distributed resource scheduling method. The method comprises: acquiring the CPU utilization rate of each host and of each virtual machine; quantizing these CPU utilization rates to obtain a quantized pressure value for each host and for each virtual machine; acquiring the historical host pressure values and the historical virtual machine pressure values; calculating a target CPU pressure value for each host from that host's quantized and historical pressure values and from the quantized and historical pressure values of each virtual machine on the host; and performing live-migration operations on the virtual machines in each host according to the target CPU pressure values. The method reduces the pressure on network resources and the probability of memory data loss caused by frequent live migration of virtual machines. The invention further discloses a corresponding cluster distributed resource scheduling device, equipment and storage medium, which have corresponding technical effects.
Owner:LANGCHAO ELECTRONIC INFORMATION IND CO LTD
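
A sketch of the pressure-value pipeline described above (quantize, blend with history, decide which hosts to relieve); the quantization buckets, smoothing weight and migration threshold are assumptions:

```python
# Sketch of the pressure-value pipeline; buckets, smoothing weight and the
# migration threshold are assumptions, not values from the patent.
from typing import Dict, List


def quantize(cpu_utilization: float) -> int:
    """Map a 0-100% utilization to a coarse pressure bucket (0..4)."""
    return min(int(cpu_utilization // 20), 4)


def target_pressure(current: int, history: List[int], alpha: float = 0.6) -> float:
    """Blend the current quantized pressure with the historical average so that a
    momentary spike does not trigger a live migration by itself."""
    if not history:
        return float(current)
    return alpha * current + (1 - alpha) * (sum(history) / len(history))


def hosts_to_relieve(host_pressures: Dict[str, float], threshold: float = 3.0) -> list:
    """Return the hosts whose target pressure justifies migrating VMs away."""
    return [h for h, p in host_pressures.items() if p >= threshold]


if __name__ == "__main__":
    pressures = {
        "host-a": target_pressure(quantize(92.0), history=[4, 3, 4]),
        "host-b": target_pressure(quantize(35.0), history=[1, 1, 2]),
    }
    print(hosts_to_relieve(pressures))   # -> ['host-a']
```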

Skill scheduling method and system for voice dialogue platform

The embodiment of the invention provides a skill scheduling method for a voice dialogue platform. The method comprises the following steps: a central control scheduling service receives the semantic result of the user's speech; the central control scheduling service invokes, in parallel, the multiple skill services related to the semantic result and obtains the analysis results they return; the analysis results are sorted by the priority of the skill services, and the highest-priority skill analysis result is passed to the skill realization discrimination service; when the discrimination service reports that the skill cannot be realized, the highest-priority result among the remaining skill analysis results is selected and passed to the discrimination service; and when the discrimination service reports that the skill can be realized, that skill analysis result is sent to the data distribution service so that it can be fed back to the user. The embodiment of the invention further provides a skill scheduling system for the voice dialogue platform. The embodiment improves skill scheduling efficiency, reduces delay and improves user experience.
Owner:AISPEECH CO LTD
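
A sketch of the central-control flow described above (parallel skill calls, priority ordering, realization check); the service interfaces are illustrative stand-ins, not the platform's real APIs:

```python
# Sketch of the central-control scheduling flow; service interfaces are
# illustrative stand-ins, not the platform's real APIs.
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List, Optional, Tuple

SkillService = Callable[[str], Tuple[int, str]]   # semantic result -> (priority, analysis)


def schedule_skills(semantic_result: str,
                    skills: List[SkillService],
                    can_realize: Callable[[str], bool]) -> Optional[str]:
    # Call every related skill service in parallel.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda s: s(semantic_result), skills))
    # Try candidates from the highest skill-service priority downwards until the
    # realization-discrimination step accepts one.
    for _, analysis in sorted(results, key=lambda r: -r[0]):
        if can_realize(analysis):
            return analysis            # would be handed to the data distribution service
    return None


if __name__ == "__main__":
    skills = [lambda q: (1, f"weather:{q}"),
              lambda q: (3, f"music:{q}"),
              lambda q: (2, f"alarm:{q}")]
    print(schedule_skills("play jazz", skills,
                          can_realize=lambda a: a.startswith("music")))
```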

Data transmission method and device of 5G system

The invention relates to the field of communication, and in particular to a data transmission method and device for a 5G system, used to reduce the load on the system and improve processing efficiency. The method comprises: determining the number of data packets to be sent, and determining the threshold interval corresponding to the current system load; when the number of data packets reaches the preset threshold corresponding to that threshold interval, aggregating (converging) the data packets to be sent according to the preset threshold and generating an aggregated data packet; and finally sending the aggregated data packet to the receiver and triggering the receiver to parse it. In this way the data packets to be sent can be merged into one aggregated packet before being sent, which reduces the number of message primitives exchanged between protocol layers, simplifies the header-adding process, precisely limits the aggregation condition, ensures the validity of the processing result, lowers the load on the system processor, improves processing efficiency, reduces the scheduling frequency, and reduces system operating overhead and resource consumption.
Owner:DATANG MOBILE COMM EQUIP CO LTD
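
A sketch of the load-dependent aggregation described above; the load intervals, thresholds and the toy packet format are illustrative assumptions:

```python
# Sketch of load-dependent packet aggregation; intervals, thresholds and the
# packet format are illustrative assumptions.
from typing import List, Optional


def threshold_for_load(load: float) -> int:
    """Pick the aggregation threshold from the interval the current load falls in."""
    if load < 0.3:
        return 8        # light load: aggregate more packets per send
    if load < 0.7:
        return 4
    return 2            # heavy load: flush sooner


def maybe_aggregate(pending: List[bytes], load: float) -> Optional[bytes]:
    """If enough packets are pending for the current load, merge them into one
    aggregated packet (one header instead of many) and return it; else return None."""
    threshold = threshold_for_load(load)
    if len(pending) < threshold:
        return None
    header = len(pending).to_bytes(2, "big")   # toy aggregate header
    return header + b"".join(pending[:threshold])


if __name__ == "__main__":
    packets = [b"pkt%d" % i for i in range(5)]
    print(maybe_aggregate(packets, load=0.5))
```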

Shared bicycle circulation system and automatic scheduling system and method based on sub-region division

The invention provides a shared bicycle circulation system and an automatic scheduling system and method based on sub-region division. The circulation system comprises above-ground conveying devices arranged at bicycle pick-up and drop-off points, underground conveying devices connected with the above-ground conveying devices at those points, and a multi-layer storage device capable of supplying bicycles to the above-ground or underground conveying devices. Adjacent pick-up and drop-off points and the above-ground storage devices are connected through the above-ground or underground conveying devices to form a mobile conveying network for the shared bicycles. The proposed circulation system links the stations within a given area; the demand at each station is predicted by a comprehensive demand prediction method, dynamic sub-region division is then performed to form a demand scheduling scheme for the stations in each sub-region, and finally the circulation system transports the shared bicycles automatically according to the scheduling scheme, providing users with an efficient and convenient bicycle access service to the greatest extent possible when a demand arises.
Owner:SHANDONG JIAOTONG UNIV +2

Water surface trash cleaning ship for environment management

Pending CN113832934A · Advantages: Versatile; Make up for collection width limitations · Fields: Batteries circuit arrangements; Water cleaning; Oil can; Refuse collection
The invention discloses a water surface trash cleaning ship for environmental management. A trash cleaning device and a floating-oil recovery device are arranged on the ship body; a water surface fence collection device is arranged at the front end of the trash collection device and comprises two traction boats and a plurality of buoys connected to them; the traction boats are powered by storage batteries and equipped with automatic navigation and charging systems, and correspondingly the floating-oil recovery device is provided with a traction-boat charging device. Compared with the prior art, the ship is comprehensive in function, multi-purpose, economical and efficient, and can clean water surface trash and floating oil at the same time. The additional water surface fence collection device is suitable for operation in narrow and shallow waters that existing ships cannot reach; a charging dock is arranged on the main hull, and the traction boats carry automatic navigation control systems and battery-level detection systems for automatic navigation and charging, so that the traction boats do not need to be retrieved and recovered manually.
Owner:青岛瑞龙科技有限公司

Prediction and scheduling method of electric power system

The invention discloses a prediction and scheduling method for an electric power system, and relates in particular to the field of electric power management. The system comprises a data exchange center, an internal communication system, an electric power production system and an electric power transmission system. The data exchange center comprises a communication scheduling subsystem, a cooperative command subsystem, an analog-quantity measuring and calculating system, an instruction transmission subsystem and a GPS positioning system, and the data interaction end of the communication scheduling subsystem is connected by signal with the data interaction end of the cooperative command subsystem. The analog-quantity measuring and calculating system measures the dispatched electric power, comparing the actual effective dispatched power with the theoretical dispatched power; the results of repeated measurements are recorded and computed into a standard template for subsequent dispatching, and the template can be updated in real time, reducing the error between each measurement result and the actual result, so that a user can more intuitively determine the value to report when applying for power dispatching.
Owner:JIANGSU ELECTRIC POWER CO +1

Inventory scheduling method and device

The invention discloses an inventory scheduling method and device, and relates to the technical field of computers. In one specific embodiment, the method comprises: calculating, under a set of constraint conditions, one or more warehouse scheduling relations for a commodity such that the total number of scheduling operations for the commodity is minimized, each warehouse scheduling relation comprising the source warehouse, the destination warehouse, the scheduling batch and an initial scheduling quantity; for each warehouse scheduling relation, optimizing the initial scheduling quantity according to the current inventory of the commodity in each warehouse and the predicted sales of the commodity over the next N unit times, to obtain the actual scheduling quantity for that relation; and outputting the warehouse scheduling relations so that commodity scheduling can be performed according to the actual scheduling quantities. The embodiment can take each SKU batch into account and perform fine-grained inventory scheduling, so that the batch difference of a commodity between warehouses does not exceed a given requirement, the batch-balance requirement is fully met, and the number of scheduling operations is kept as small as possible and can be minimized over a future period.
Owner:BEIJING JINGDONG ZHENSHI INFORMATION TECH CO LTD
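
A two-phase sketch of the idea above (keep the number of transfers small, then size each transfer from current stock and forecast sales); the greedy pairing and sizing rule are illustrative, not the patented optimizer:

```python
# Two-phase sketch: the greedy pairing and the sizing rule are illustrative,
# not the patented optimizer.
from typing import Dict, List, Tuple

Transfer = Tuple[str, str, int]   # (from_warehouse, to_warehouse, quantity)


def plan_transfers(stock: Dict[str, int], forecast: Dict[str, int]) -> List[Transfer]:
    # Phase 1: pair the most over-stocked warehouse with the most under-stocked
    # one so that few transfers cover most of the imbalance.
    surplus = {w: stock[w] - forecast[w] for w in stock}
    transfers: List[Transfer] = []
    donors = sorted((w for w in surplus if surplus[w] > 0), key=lambda w: -surplus[w])
    takers = sorted((w for w in surplus if surplus[w] < 0), key=lambda w: surplus[w])
    for src in donors:
        for dst in takers:
            if surplus[src] <= 0 or surplus[dst] >= 0:
                continue
            # Phase 2: size the transfer from current stock and forecast demand.
            qty = min(surplus[src], -surplus[dst])
            transfers.append((src, dst, qty))
            surplus[src] -= qty
            surplus[dst] += qty
    return transfers


if __name__ == "__main__":
    print(plan_transfers(stock={"A": 120, "B": 20, "C": 60},
                         forecast={"A": 50, "B": 70, "C": 60}))
```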

Tower type photo-thermal power station heliostat scheduling method based on heat absorber temperature control

The invention belongs to the technical field of tower-type solar photo-thermal power generation, and in particular relates to a heliostat scheduling method for a tower-type photo-thermal power station based on heat absorber temperature control. The method comprises the following steps: first obtaining the current parameters and state of the heat absorber and the current and short-term solar radiation, meteorological parameters and cloud influence parameters; calculating the temperature-related parameters corresponding to each time step within a certain period and judging whether they exceed their limits; if not, determining the target aiming point of each heliostat for each time step; and if a limit is exceeded, gradually moving the heliostats' aiming points from the initial aiming points toward the edge of the heat absorber, recalculating whether the temperature-related parameters exceed the limits for each candidate aiming point until they no longer do, then confirming the heliostat target points for the time steps and scheduling the heliostats. Because the heliostat target points are controlled and the heliostats are scheduled based on the temperature-related parameters of the heat absorber, the temperature of the heat absorber remains controllable under normal steady-state conditions and under start-up, shut-down, preheating and changing conditions in which the mirror field is affected by cloud and the like.
Owner:DONGFANG BOILER GROUP OF DONGFANG ELECTRIC CORP
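
A sketch of the "move the aim point toward the receiver edge until the predicted temperature is within limits" loop; the thermal model and step size are placeholder assumptions, not the patent's calculation:

```python
# Sketch of the aim-point search loop; the thermal model and step size are
# placeholder assumptions.
from typing import Callable


def choose_aim_offset(predict_peak_temp: Callable[[float], float],
                      temp_limit: float,
                      max_offset: float,
                      step: float = 0.1) -> float:
    """Return the smallest offset (0 = receiver centre, max_offset = edge) at which
    the predicted peak receiver temperature stays within the limit."""
    offset = 0.0
    while offset <= max_offset:
        if predict_peak_temp(offset) <= temp_limit:
            return offset
        offset += step
    return max_offset   # fall back to the edge if even that exceeds the limit


if __name__ == "__main__":
    # Toy thermal model: flux (and temperature) falls off as the aim point moves outward.
    model = lambda off: 620.0 - 150.0 * off
    print(round(choose_aim_offset(model, temp_limit=565.0, max_offset=1.0), 2))
```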

Method and device for cluster management task scheduling

The invention relates to the field of node allocation, and in particular to a cluster management task scheduling method that allocates limited computing resources among users and tasks as reasonably as possible by ranking and allocating according to multiple dimensions such as task priority, user quota, running time and user utilization rate, thereby balancing the distribution of internal resources. Compared with traditional scheduling, the independent design of the scheduling module avoids the coupling between the scheduling module and other modules found in traditional systems, making the program more flexible and easier to extend. A combined interruption-and-start logic is adopted: the scheduling layer does not need to distinguish task states or whether a task is new, interrupted or restarted, but only needs to compute the result, which improves the independence of the task scheduling module in the underlying design and improves task scheduling efficiency. This provides conditions for serving more users, larger clusters and more complex platform service scenarios.
Owner:杭州幻方人工智能基础研究有限公司
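
A sketch of the multi-dimension ranking the abstract mentions; the weights, field names and scoring formula are illustrative assumptions:

```python
# Sketch of multi-dimension task ranking; weights, field names and the formula
# are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class TaskView:
    priority: int          # task priority set by the user
    user_quota: int        # number of slots the user is entitled to
    user_running: int      # slots the user currently occupies
    waited_seconds: float  # how long the task has been waiting


def score(t: TaskView, w_priority=1.0, w_fairness=2.0, w_wait=0.01) -> float:
    """Higher score = scheduled earlier. Users far below their quota and tasks
    that have waited long are favoured, alongside the explicit task priority."""
    fairness = (t.user_quota - t.user_running) / max(t.user_quota, 1)
    return w_priority * t.priority + w_fairness * fairness + w_wait * t.waited_seconds


if __name__ == "__main__":
    tasks = [TaskView(priority=2, user_quota=10, user_running=9, waited_seconds=30),
             TaskView(priority=1, user_quota=10, user_running=2, waited_seconds=300)]
    ranked = sorted(tasks, key=score, reverse=True)
    print(ranked[0])
```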