Communication method and device
A communication method and device applied in the field of video networking, addressing the problems that prior-art alarm equipment cannot carry out real-time video communication and that on-site personnel cannot be promptly evacuated.
Inactive Publication Date: 2019-04-16
VISIONVERA INFORMATION TECH CO LTD
5 Cites 6 Cited by
AI-Extracted Technical Summary
Problems solved by technology
[0004] However, the alarm equipment in the prior art cannot perform real-time video communication, resulting in the commander being unable to effe...
Method used
Based on the above characteristics of video networking, the communication scheme of the embodiments of the present application is proposed: following the video networking protocol, a call connection is established between the first terminal that sends the alarm instruction and the second terminal, which can reduce losses in an emergency.
Specifically, the message repeater converts the alarm instruction into an alarm instruction conforming to the video networking standard protocol; during the conversion, the signal can be demodulated and regenerated to remove noise from the transmitted data, thereby improving the communication quality of the entire video networking communication link.
The structural design of video networking eliminates the network security problems that plague the Internet through a separate permission system for each service and complete isolation of equipment and user data,...
Abstract
The application provides a communication method and device applied to video networking. The method comprises the following steps: when detecting an emergency event, a first terminal sends an alarm instruction to a GIS application platform; the GIS application platform sends a videophone request to a second terminal according to the alarm instruction; and the second terminal establishes a communication connection with the first terminal according to the videophone request. By adopting the scheme provided by the application, a communication connection can be established between the first terminal that sends the alarm instruction and the second terminal, so that the first terminal and the second terminal can perform real-time video or audio communication; when encountering emergency events such as fire or flood, command staff can direct the on-site personnel according to the actual conditions at the scene, and personnel can be urgently evacuated at the scene by broadcast, thereby reducing losses.
Application Domain
Two-way working systems · Transmission +1
Technology Topic
Flood hazard · The Internet +5
Examples
- Experimental program(1)
Example Embodiment
[0051] In order to make the above objectives, features, and advantages of the application clearer and easier to understand, the application is described in further detail below with reference to the drawings and specific implementations.
[0052] Video networking is an important milestone in network development. It is a real-time network that can realize real-time transmission of high-definition video and push many Internet applications toward high-definition, face-to-face interaction.
[0053] Video networking adopts real-time high-definition video exchange technology and can integrate dozens of services on one network platform, such as high-definition video conferencing, video surveillance, intelligent monitoring and analysis, emergency command, digital broadcast television, delayed television, online teaching, live broadcast, VOD, television mail, personalized video recording (PVR), intranet (self-organized) channels, intelligent video broadcast control, and information release, covering video, voice, picture, text, communication, and data services, with high-definition quality video playback achieved through a television or computer.
[0054] In order to enable those skilled in the art to better understand the embodiments of this application, the following introduces video networking:
[0055] Some of the technologies applied in video networking are as follows:
[0056] Network Technology
[0057] The network technology innovation of video networking improves traditional Ethernet to handle the potentially huge video traffic on the network. Unlike pure network packet switching (Packet Switching) or network circuit switching (Circuit Switching), video networking technology uses packet switching while meeting the requirements of streaming. Video networking technology has the flexibility, simplicity, and low cost of packet switching as well as the quality and security of circuit switching, and realizes seamless connection of switched virtual circuits and data formats across the entire network.
[0058] Switching Technology
[0059] Video networking combines the two advantages of Ethernet, asynchrony and packet switching, and eliminates Ethernet's defects while remaining fully compatible. It provides seamless end-to-end connections across the entire network, reaches user terminals directly, and directly carries IP data packets; user data requires no format conversion anywhere in the network. Video networking is a more advanced form of Ethernet: a real-time exchange platform that can realize large-scale real-time transmission of high-definition video across the entire network, which the Internet currently cannot achieve, and push many network video applications toward high definition and unification.
[0060] Server Technology
[0061] The server technology of the video networking and unified video platform differs from servers in the traditional sense. Its streaming media transmission is built on a connection-oriented basis; its data processing capability is independent of traffic and communication time, and a single network layer can carry both signaling and data transmission. For voice and video services, streaming media processing on the video networking and unified video platform is much simpler than general data processing, and its efficiency is improved by more than a hundred times compared with traditional servers.
[0062] Storage Technology
[0063] The ultra-high-speed storage technology of the unified video platform adopts the most advanced real-time operating system in order to handle ultra-large-capacity, ultra-high-traffic media content. Program information in server instructions is mapped to specific hard disk space, and media content no longer passes through the server but is sent directly to the user terminal, with user waiting time generally less than 0.2 seconds. Optimized sector distribution greatly reduces the mechanical seek movement of the hard disk head; resource consumption is only 20% of that of an IP Internet system of the same scale, yet it generates concurrent traffic three times larger than a traditional hard disk array, with an overall efficiency improvement of more than ten times.
[0064] Network Security Technology
[0065] The structural design of video networking eliminates the network security problems that plague the Internet through a separate permission system for each service and complete isolation of equipment and user data. Generally, no anti-virus programs or firewalls are required to guard against hacker and virus attacks, providing users with a structurally worry-free and secure network.
[0066] Service Innovation Technology
[0067] The unified video platform integrates services and transmission. Whether for a single user, a private-network user, or an entire network, connection is automatic. A user terminal, set-top box, or PC connects directly to the unified video platform to obtain a variety of multimedia video services. The unified video platform adopts a "recipe-style" table configuration model to replace traditional complex application programming, so complex applications can be realized with very little code, enabling virtually "unlimited" new service innovation.
[0068] The networking of video networking is as follows:
[0069] Video networking has a centrally controlled network structure. The network can be a tree network, star network, ring network, etc., but on this basis a centralized control node is required to control the entire network.
[0070] As shown in Figure 1, the video network is divided into two parts: an access network and a metropolitan area network.
[0071] The equipment of the access network can be divided into three categories: node servers, access switches, and terminals (including various set-top boxes, encoding boards, storage, etc.). The node server is connected to the access switch, and the access switch can be connected to multiple terminals and can be connected to the Ethernet.
[0072] Among them, the node server is a node with a centralized control function in the access network, and can control the access switch and the terminal. The node server can be directly connected to the access switch or directly connected to the terminal.
[0073] Similarly, the devices of the metropolitan area network can also be divided into three categories: metropolitan area servers, node switches, and node servers. The metropolitan area server is connected to the node switch, and the node switch can be connected to multiple node servers.
[0074] Among them, the node server is the node server of the access network part, that is, the node server belongs to both the access network part and the metropolitan area network part.
[0075] The metropolitan area server is a node with a centralized control function in the metropolitan area network, and can control the node switch and the node server. The metropolitan area server can be directly connected to the node switch or the node server.
[0076] It can be seen that the entire video network is a hierarchically and centrally controlled network structure, and the networks controlled by the node servers and the metropolitan area servers can have various structures such as tree, star, and ring.
[0077] Put simply, the access network part can form a unified video platform (the part within the dashed circle), and multiple unified video platforms can form a video network; the unified video platforms can be interconnected through metropolitan-area and wide-area video networks.
[0078] 1. Classification of video networking devices
[0079] 1.1 The video networking devices in the embodiments of the present application can be mainly divided into three categories: servers, switches (including Ethernet gateways), and terminals (including various set-top boxes, encoding boards, storage, etc.). The video network as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
[0080] 1.2 The equipment of the access network part can be mainly divided into three categories: node servers, access switches (including Ethernet gateways), terminals (including various set-top boxes, encoding boards, storage, etc.).
[0081] The specific hardware structure of each access network device is:
[0082] Node server:
[0083] As shown in Figure 2, it mainly includes a network interface module 201, a switching engine module 202, a CPU module 203, and a disk array module 204.
[0084] Packets coming in from the network interface module 201, the CPU module 203, and the disk array module 204 all enter the switching engine module 202. The switching engine module 202 looks up the incoming packet in the address table 205 to obtain the packet's direction information, and stores the packet in the queue of the corresponding packet buffer 206 according to that direction information; if the queue of the packet buffer 206 is nearly full, the packet is discarded. The switching engine module 202 polls all packet buffer queues and forwards a packet if the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 204 mainly implements control of the hard disks, including initialization and reading and writing; the CPU module 203 is mainly responsible for protocol processing with the access switch and the terminal (not shown in the figure), the configuration of the address table 205 (including the downstream protocol packet address table, the upstream protocol packet address table, and the data packet address table), and the configuration of the disk array module 204.
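The following is a minimal sketch, in Python, of the node-server switching behavior just described: an address-table lookup yields the packet's direction, the packet is buffered per output port (or dropped when the queue is nearly full), and a polling loop forwards packets only when the port send buffer is not full and the queue counter is greater than zero. All class names, thresholds, and sample values are illustrative assumptions, not taken from the patent.

    from collections import deque

    class SwitchingEngine:
        """Sketch of the node-server switching engine (module 202) described above."""

        QUEUE_LIMIT = 64  # hypothetical "nearly full" threshold

        def __init__(self, address_table):
            self.address_table = address_table   # destination address -> output port
            self.queues = {}                     # output port -> packet queue
            self.send_buffer_free = {}           # output port -> free slots in send buffer

        def receive(self, packet):
            """Check the address table and enqueue, or drop if the queue is nearly full."""
            port = self.address_table.get(packet["da"])
            if port is None:
                return                           # unknown destination: drop
            queue = self.queues.setdefault(port, deque())
            if len(queue) >= self.QUEUE_LIMIT:
                return                           # packet buffer nearly full: discard
            queue.append(packet)

        def poll(self):
            """Poll all buffer queues and forward packets that meet both conditions."""
            forwarded = []
            for port, queue in self.queues.items():
                # 1) port send buffer not full; 2) queue packet counter > 0
                while self.send_buffer_free.get(port, 0) > 0 and len(queue) > 0:
                    forwarded.append((port, queue.popleft()))
                    self.send_buffer_free[port] -= 1
            return forwarded

    engine = SwitchingEngine(address_table={b"\x02" + b"\x00" * 7: 3})
    engine.send_buffer_free[3] = 2
    engine.receive({"da": b"\x02" + b"\x00" * 7, "payload": b"\x00" * 32})
    print(engine.poll())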
[0085] Access switch:
[0086] As shown in Figure 3, it mainly includes network interface modules (downlink network interface module 301, uplink network interface module 302), a switching engine module 303, and a CPU module 304.
[0087] A packet (uplink data) coming in from the downlink network interface module 301 enters the packet detection module 305. The packet detection module 305 checks whether the destination address (DA), source address (SA), data packet type, and packet length of the packet meet the requirements; if they do, a corresponding stream identifier (stream-id) is allocated and the packet enters the switching engine module 303, otherwise the packet is discarded. A packet (downlink data) coming in from the uplink network interface module 302 enters the switching engine module 303, as does a data packet from the CPU module 304. The switching engine module 303 looks up the incoming packet in the address table 306 to obtain the packet's direction information. If a packet entering the switching engine module 303 goes from the downlink network interface toward the uplink network interface, it is stored in the queue of the corresponding packet buffer 307 in combination with the stream identifier (stream-id); if that queue is nearly full, the packet is discarded. If a packet entering the switching engine module 303 does not go from the downlink network interface toward the uplink network interface, it is stored in the queue of the corresponding packet buffer 307 according to the packet direction information; if that queue is nearly full, the packet is discarded.
[0088] The switching engine module 303 polls all packet buffer queues. In the embodiment of this application, there are two situations:
[0089] If the queue goes from the downlink network interface toward the uplink network interface, the packet is forwarded when the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero; 3) a token generated by the rate control module is obtained;
[0090] If the queue does not go from the downlink network interface toward the uplink network interface, the packet is forwarded when the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero.
[0091] The code rate control module 308 is configured by the CPU module 304 and, within a programmable interval, generates tokens for all packet buffer queues going from the downlink network interface toward the uplink network interface, so as to control the upstream forwarding code rate.
[0092] The CPU module 304 is mainly responsible for the protocol processing with the node server, the configuration of the address table 306, and the configuration of the code rate control module 308.
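The token-based check on upstream queues can be illustrated with the following sketch, assuming a simple token-bucket style rate control module with a programmable interval; the interval, burst size, and function names are hypothetical and only the forwarding conditions follow the text above.

    import time

    class RateControlModule:
        """Sketch of the code rate control described above: tokens are generated for
        upstream (downlink-interface -> uplink-interface) queues at a programmable
        interval, limiting the upstream forwarding rate."""

        def __init__(self, interval_s=0.01, burst=8):
            self.interval_s = interval_s   # programmable token interval
            self.burst = burst             # maximum accumulated tokens (assumption)
            self.tokens = 0
            self._last = time.monotonic()

        def refill(self):
            now = time.monotonic()
            new_tokens = int((now - self._last) / self.interval_s)
            if new_tokens:
                self.tokens = min(self.burst, self.tokens + new_tokens)
                self._last = now

        def take(self):
            self.refill()
            if self.tokens > 0:
                self.tokens -= 1
                return True
            return False

    def may_forward(queue_len, send_buffer_free, upstream, rate_ctrl):
        """Forwarding conditions from the text: send buffer not full, counter > 0,
        and (for upstream queues only) a token from the rate control module."""
        if send_buffer_free <= 0 or queue_len <= 0:
            return False
        return rate_ctrl.take() if upstream else True

    rc = RateControlModule(interval_s=0.001)
    time.sleep(0.005)
    print(may_forward(queue_len=3, send_buffer_free=1, upstream=True, rate_ctrl=rc))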
[0093] Ethernet protocol conversion gateway:
[0094] As shown in Figure 4, it mainly includes network interface modules (downlink network interface module 401, uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet detection module 405, a code rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409, and a MAC deleting module 410.
[0095] A data packet coming in from the downlink network interface module 401 enters the packet detection module 405. The packet detection module 405 checks whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video networking destination address DA, video networking source address SA, video networking packet type, and packet length meet the requirements; if so, a corresponding stream identifier (stream-id) is allocated, the MAC deleting module 410 strips the MAC DA, MAC SA, and length or frame type (2 bytes), and the packet enters the corresponding receive buffer; otherwise the packet is discarded.
[0096] The downlink network interface module 401 checks the send buffer of the port. If there is a packet, it learns the Ethernet MAC DA of the corresponding terminal according to the video networking destination address DA of the packet, adds the terminal's Ethernet MAC DA, the Ethernet protocol conversion gateway's MAC SA, and the Ethernet length or frame type, and sends the packet.
[0097] The functions of other modules in the Ethernet protocol conversion gateway are similar to those of the access switch.
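A minimal sketch of the gateway's MAC handling follows: the ingress path strips the 14-byte Ethernet header (as the MAC deleting module does), and the egress path prepends the learned terminal MAC DA, the gateway MAC SA, and a length/frame-type field (as the MAC adding module does). The gateway MAC, EtherType value, and learning-table contents are illustrative assumptions only.

    import struct

    GATEWAY_MAC = bytes.fromhex("0a1b2c3d4e5f")   # hypothetical gateway MAC SA
    ETHERTYPE_VNET = 0x88B5                        # placeholder frame type, not from the patent

    def strip_ethernet_header(frame: bytes) -> bytes:
        """Ingress path: remove MAC DA (6 B) + MAC SA (6 B) + length/frame type (2 B),
        leaving the bare video networking packet."""
        return frame[14:]

    def add_ethernet_header(vnet_packet: bytes, terminal_mac: bytes) -> bytes:
        """Egress path: prepend the terminal's learned MAC DA, the gateway's MAC SA,
        and the Ethernet length/frame type before sending."""
        return terminal_mac + GATEWAY_MAC + struct.pack("!H", ETHERTYPE_VNET) + vnet_packet

    # Learned mapping from video networking destination address to terminal MAC
    # ("learns the Ethernet MAC DA of the corresponding terminal according to the
    # video networking destination address DA").
    mac_learning_table = {b"\x02" + b"\x00" * 7: bytes.fromhex("665544332211")}

    vnet_da = b"\x02" + b"\x00" * 7
    payload = vnet_da + b"\x01" * 24
    frame_out = add_ethernet_header(payload, mac_learning_table[vnet_da])
    assert strip_ethernet_header(frame_out) == payload
    print(len(frame_out), "bytes on the wire")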
[0098] Terminal:
[0099] A terminal mainly includes a network interface module, a service processing module, and a CPU module. For example, a set-top box mainly includes a network interface module, a video/audio codec engine module, and a CPU module; an encoding board mainly includes a network interface module, a video/audio codec engine module, and a CPU module; a storage device mainly includes a network interface module, a CPU module, and a disk array module.
[0100] 1.3 The equipment of the metropolitan area network can be mainly divided into three categories: node servers, node switches, and metropolitan area servers. The node switch mainly includes a network interface module, a switching engine module, and a CPU module; the metropolitan area server mainly includes a network interface module, a switching engine module, and a CPU module.
[0101] 2. Definition of video networking data packets
[0102] 2.1 Access network data packet definition
[0103] The data packet of the access network mainly includes the following parts: destination address (DA), source address (SA), reserved bytes, payload (PDU), CRC.
[0104] As shown in the following table, the data packet of the access network mainly includes the following parts:
[0105] DA | SA | Reserved | PDU | CRC
[0106] Among them:
[0107] The destination address (DA) consists of 8 bytes. The first byte indicates the type of data packet (such as various protocol packets, multicast data packets, unicast data packets, etc.), with up to 256 possibilities. The second through sixth bytes are the metropolitan area network address, and the seventh and eighth bytes are the access network address;
[0108] The source address (SA) also consists of 8 bytes, with the same definition as the destination address (DA);
[0109] The reserved byte consists of 2 bytes;
[0110] The payload (PDU) has a different length according to the type of data packet: 64 bytes for the various protocol packets and 32+1024=1056 bytes for unicast data packets; of course, it is not limited to these two kinds;
[0111] CRC consists of 4 bytes, and its calculation method follows the standard Ethernet CRC algorithm.
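The access-network packet layout above can be sketched as follows in Python. The field widths (DA 8 bytes, SA 8 bytes, reserved 2 bytes, PDU, CRC 4 bytes) and the use of the standard Ethernet CRC-32 follow the text; the concrete field values and function name are illustrative assumptions.

    import struct
    import zlib

    def build_access_packet(pkt_type: int, man_addr: bytes, an_addr: bytes,
                            sa: bytes, pdu: bytes) -> bytes:
        """Sketch of the access-network packet: DA(8) + SA(8) + reserved(2) + PDU + CRC(4).

        DA = 1-byte packet type + 5-byte metropolitan area network address
             + 2-byte access network address.
        The CRC uses the standard Ethernet CRC-32 algorithm (zlib.crc32 here).
        """
        assert len(man_addr) == 5 and len(an_addr) == 2 and len(sa) == 8
        da = struct.pack("!B", pkt_type) + man_addr + an_addr
        reserved = b"\x00\x00"
        body = da + sa + reserved + pdu
        crc = struct.pack("!I", zlib.crc32(body) & 0xFFFFFFFF)
        return body + crc

    # A unicast packet: 32 + 1024 = 1056-byte payload, as stated above.
    pdu = bytes(32 + 1024)
    packet = build_access_packet(pkt_type=0x10, man_addr=b"\x00" * 5,
                                 an_addr=b"\x00\x01", sa=bytes(8), pdu=pdu)
    print(len(packet))  # 8 + 8 + 2 + 1056 + 4 = 1078 bytes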
[0112] 2.2 Metropolitan area network data packet definition
[0113] The topology of the metropolitan area network is a graph. There may be two or more connections between two devices; that is, there may be more than two connections between a node switch and a node server, or between two node switches. However, the metropolitan area network address of a metropolitan area network device is unique. In order to accurately describe the connection relationship between metropolitan area network devices, a parameter, the tag, is introduced in the embodiments of this application to uniquely describe a metropolitan area network device.
[0114] The definition of the label here is similar to that of MPLS (Multi-Protocol Label Switching). Assuming there are two connections between device A and device B, data packets going from device A to device B have two labels, and data packets going from device B to device A also have two labels. Labels are divided into in-labels and out-labels. Assuming the label (in-label) of a data packet entering device A is 0x0000, the label (out-label) of that data packet when it leaves device A may become 0x0001. The network access process of the metropolitan area network is a centrally controlled access process, which means that the address allocation and label allocation of the metropolitan area network are both dominated by the metropolitan area server, and the node switches and node servers execute them passively; this differs from MPLS, in which label distribution is the result of mutual negotiation between switches and servers.
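The label swap described above can be sketched as a small forwarding table, keyed by in-label, whose entries (out-label, outgoing connection) are assigned by the metropolitan area server rather than negotiated as in MPLS. The numbers follow the example in the text (in-label 0x0000 leaving as out-label 0x0001); the table contents and link names are hypothetical.

    # Hypothetical label-forwarding table for one metropolitan area device ("device A").
    label_table = {
        0x0000: (0x0001, "link-to-B-1"),
        0x0002: (0x0003, "link-to-B-2"),
    }

    def forward_by_label(in_label: int, packet: bytes):
        """Swap the in-label for the out-label and pick the outgoing connection."""
        out_label, link = label_table[in_label]
        return out_label, link, packet

    print(forward_by_label(0x0000, b"demo"))  # (1, 'link-to-B-1', b'demo')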
[0115] As shown in the following table, the data packet of the metropolitan area network mainly includes the following parts:
[0116] DA | SA | Reserved | Tag | PDU | CRC
[0117] Namely: destination address (DA), source address (SA), reserved bytes (Reserved), tag, payload (PDU), and CRC. The format of the tag can follow this definition: the tag is 32 bits, of which the high 16 bits are reserved and only the low 16 bits are used; its position is between the reserved bytes of the data packet and the payload.
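Following the access-network sketch above, the metropolitan area network packet can be laid out like this; the field order and the 32-bit tag with only the low 16 bits used follow the text, while the sample values are illustrative assumptions.

    import struct
    import zlib

    def build_metro_packet(da: bytes, sa: bytes, tag: int, pdu: bytes) -> bytes:
        """Sketch of the metropolitan area packet: DA(8) + SA(8) + reserved(2)
        + tag(4) + PDU + CRC(4). Only the low 16 bits of the tag carry the label."""
        assert len(da) == 8 and len(sa) == 8
        tag_field = struct.pack("!I", tag & 0xFFFF)   # high 16 bits reserved (zero)
        body = da + sa + b"\x00\x00" + tag_field + pdu
        return body + struct.pack("!I", zlib.crc32(body) & 0xFFFFFFFF)

    pkt = build_metro_packet(bytes(8), bytes(8), tag=0x0001, pdu=bytes(64))
    print(len(pkt))  # 8 + 8 + 2 + 4 + 64 + 4 = 90 bytes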
[0118] Based on the above-mentioned characteristics of video networking, the call solution of the embodiment of the present application is proposed. Following the video networking protocol, a call connection is established between the first terminal and the second terminal that sends an alarm command, which can reduce losses in an emergency.
[0119] Referring to Figure 5, a flowchart of the steps of a call method provided by an embodiment of the present application is shown. The call method can be applied to a video network, which can include a GIS application platform, a first terminal, and a second terminal.
[0120] The GIS application platform is the video networking GIS Tianyan dispatching application platform. Terminal equipment can be located through the GIS application platform, and terminal equipment numbers can be read and queried as needed.
[0121] The call method provided in this embodiment may include the following steps:
[0122] Step 501: When an emergency event is detected, the first terminal sends an alarm instruction to the GIS application platform.
[0123] When the first terminal detects that an emergency event has occurred or is occurring, it can send an alarm instruction to the GIS application platform based on the video network. The video networking software is installed on the first terminal (for example, a mobile phone), and the user can connect to the video network through the video networking software to obtain video services such as video calls, video conferences, and live video broadcasts. The video networking software also has an alarm function, and users can send alarm instructions to the GIS application platform by clicking the alarm button in the video networking software.
[0124] Specifically, the first terminal may send the alarm instruction to the message repeater based on the video network; after conversion by the message repeater, it is sent to the GIS application platform based on the video network. The alarm instruction contains at least the point (location) information of the first terminal.
[0125] Step 502: The GIS application platform sends a videophone request to the second terminal according to the alarm instruction.
[0126] Specifically, the GIS application platform can obtain the point information of the first terminal according to the alarm instruction and send a videophone request to the second terminal according to that point information. The second terminal may be, for example, a terminal device bound to the GIS application platform. The videophone request contains at least the number information of the first terminal.
[0127] Step 503: The second terminal establishes a call connection with the first terminal according to the videophone request.
[0128] Specifically, after receiving the videophone request, the second terminal can extract the number information of the first terminal from the videophone request and request the video service from the first terminal to establish a call. The first terminal can then notify the on-site personnel to evacuate by broadcast.
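A minimal sketch of the three-step call flow (steps 501 to 503) is given below. The message formats, function names, and sample number are hypothetical; only the sequence and the carried information (point information in the alarm, number information in the videophone request) follow the text.

    def send_alarm(first_terminal):
        """Step 501: on detecting an emergency, the first terminal sends an alarm
        instruction carrying at least its point (location) information."""
        return {"type": "alarm", "point": first_terminal["point"],
                "number": first_terminal["number"]}

    def gis_handle_alarm(alarm):
        """Step 502: the GIS application platform locates the first terminal from the
        alarm and builds a videophone request carrying at least its number."""
        return {"type": "videophone_request", "callee_number": alarm["number"]}

    def establish_call(request):
        """Step 503: the second terminal extracts the number and requests the video
        service from the first terminal to set up the call."""
        return f"call established with terminal {request['callee_number']}"

    first_terminal = {"number": "vnet-1001", "point": (39.9, 116.4)}
    print(establish_call(gis_handle_alarm(send_alarm(first_terminal))))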
[0129] In practical applications, the video network may also include a core server, and the first terminal and the second terminal may establish a video or audio call through the core server.
[0130] Using the technical solution provided by this embodiment, a call connection can be established between the first terminal that sends the alarm instruction and the second terminal, so that the first terminal and the second terminal can conduct real-time video or audio communication. In the event of an emergency such as a fire or flood, the commander can direct the on-site personnel according to the actual situation at the scene, and personnel at the scene can also be urgently evacuated by broadcast, thereby reducing losses.
[0131] In order to improve the communication quality of the entire video networking communication link and remove noise from the transmitted data, referring to Figure 9, the video network may also include a message repeater. Referring to Figure 6, the foregoing step 501 may specifically include:
[0132] Step 601: The first terminal sends an alarm instruction to the message repeater.
[0133] Specifically, when an emergency event such as an earthquake or a fire occurs, the first terminal sends an alarm instruction to the message repeater based on the video network.
[0134] Step 602: The message repeater converts the alarm instruction into an alarm instruction conforming to the video networking standard protocol and sends it to the GIS application platform.
[0135] Specifically, the message repeater converts the alarm instruction into an alarm instruction that conforms to the video networking standard protocol. During the conversion, the signal can be demodulated and regenerated to remove noise from the transmitted data, thereby improving the communication quality of the entire video networking communication link.
[0136] Then, the message repeater sends the alarm instruction conforming to the video networking standard protocol to the GIS application platform based on the video network.
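The repeater's conversion step can be sketched as re-encoding the incoming alarm into a packet that follows the video networking standard protocol before forwarding it to the GIS platform. The concrete on-the-wire encoding here (a version byte, a length field, and a JSON payload) is purely an illustrative assumption; the patent does not specify the protocol format.

    import json

    VNET_PROTOCOL_VERSION = 1  # hypothetical version constant, not from the patent

    def convert_to_vnet_protocol(raw_alarm: dict) -> bytes:
        """Sketch of the message repeater's job: re-encode (in effect, demodulate and
        regenerate) the incoming alarm instruction into a video networking standard
        protocol packet before forwarding it to the GIS application platform."""
        payload = json.dumps({
            "cmd": "alarm",
            "point": raw_alarm["point"],
            "number": raw_alarm["number"],
        }).encode()
        header = bytes([VNET_PROTOCOL_VERSION]) + len(payload).to_bytes(2, "big")
        return header + payload

    packet = convert_to_vnet_protocol({"point": (39.9, 116.4), "number": "vnet-1001"})
    print(packet[:3], len(packet))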
[0137] In one implementation, referring to Figure 7, the foregoing step 502 may specifically include:
[0138] Step 701: Determine the point information of the first terminal according to the alarm instruction.
[0139] Specifically, the GIS application platform can locate the first terminal according to the alarm instruction and obtain the point information of the first terminal.
[0140] Step 702: Generate a videophone request based on the point information.
[0141] Specifically, the GIS application platform may generate a videophone request corresponding to the point information according to the point information, and the videophone request includes at least the number information of the first terminal obtained according to the point information.
[0142] Step 703: Send a videophone request to the second terminal.
[0143] Specifically, the GIS application platform can first send the videophone request to the message repeater (refer to Figure 9); after the message repeater converts and processes it, the message repeater sends the processed videophone request to the second terminal.
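Steps 701 to 703 can be sketched as follows, assuming a simple point-to-number registry standing in for the GIS platform's data and a forwarder callable standing in for the message repeater; both are hypothetical illustrations of the lookup and relay described above.

    # Hypothetical registry mapping point information to video networking numbers.
    terminal_registry = {(39.9, 116.4): "vnet-1001"}

    def build_videophone_request(point):
        number = terminal_registry[point]      # number obtained from the point information
        return {"type": "videophone_request", "callee_number": number, "point": point}

    def send_via_repeater(request, repeater):
        return repeater(request)               # the repeater converts and relays the request

    print(send_via_repeater(build_videophone_request((39.9, 116.4)), lambda r: r))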
[0144] In one implementation, referring to Figure 8, the foregoing step 503 may specifically include:
[0145] Step 801: The second terminal obtains the number information of the first terminal according to the videophone request.
[0146] Specifically, the second terminal can obtain the number information of the first terminal by analyzing the videophone request.
[0147] Step 802: According to the number information, the second terminal establishes a call connection with the first terminal.
[0148] Specifically, for example, a downlink communication link between the first terminal and the second terminal may be established based on the number information, and based on that downlink communication link, the second terminal establishes a call connection with the first terminal.
[0149] In practical applications, the video network is a network with centralized control functions, including a master control server and lower-level network equipment, where the lower-level network equipment includes terminals. One of the core concepts of video networking is that the master control server notifies the switching equipment involved in the downlink communication link of the current service to configure a table, and data packets are then transmitted according to the configured table.
[0150] The communication methods in video networking include:
[0151] The main control server configures the downlink communication link of the current service;
[0152] The data packet of the current service sent by the source terminal (such as the second terminal) is transmitted to the target terminal (such as the first terminal) according to the downlink communication link.
[0153] In the embodiment of the present application, configuring the downlink communication link of the current service includes: notifying the switching equipment involved in the downlink communication link of the current service to configure a table;
[0154] Furthermore, transmitting according to the downlink communication link includes: the switching device queries the configured table and transmits the received data packet through the corresponding port.
[0155] In a specific implementation, the services include unicast communication services and multicast communication services. That is, whether for multicast communication or unicast communication, the core concept of table configuration can be used to realize communication in video networking.
[0156] As mentioned above, the video network includes an access network part. In the access network, the master control server is a node server, and the lower-level network equipment includes access switches and terminals.
[0157] For the unicast communication service in the access network, the step of configuring the downlink communication link of the current service by the master control server may include the following steps:
[0158] Sub-step S11: the master control server obtains the downlink communication link information of the current service according to the service request protocol packet initiated by the source terminal, where the downlink communication link information includes the downlink communication port information of the master control server and of the access switches participating in the current service;
[0159] Sub-step S12: the master control server sets, in its internal data packet address table, the downstream port to which the data packet of the current service is directed, according to its own downlink communication port information; and, according to the downlink communication port information of the access switches, sends a port configuration command to the corresponding access switch;
[0160] Sub-step S13: the access switch sets, in its internal data packet address table, the downstream port to which the data packet of the current service is directed, according to the port configuration command.
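Sub-steps S11 to S13 can be sketched as follows: the node server records the downstream port for the service in its own data packet address table and issues a port configuration command so that the access switch does the same. The class and identifier names and the port numbers are illustrative assumptions.

    class NodeServer:
        def __init__(self):
            self.packet_address_table = {}     # service id -> downstream port

        def configure_downlink(self, service_id, own_port, switch, switch_port):
            self.packet_address_table[service_id] = own_port     # sub-step S12 (own table)
            switch.apply_port_config(service_id, switch_port)    # sub-step S12 (command)

    class AccessSwitch:
        def __init__(self):
            self.packet_address_table = {}

        def apply_port_config(self, service_id, port):
            self.packet_address_table[service_id] = port          # sub-step S13

    server, switch = NodeServer(), AccessSwitch()
    server.configure_downlink("svc-42", own_port=1, switch=switch, switch_port=5)
    print(server.packet_address_table, switch.packet_address_table)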
[0161] For the multicast communication service (such as emergency command) in the access network, the step of obtaining the downlink communication link information of the current service by the master control server may include the following sub-steps:
[0162] Sub-step S21: the master control server obtains the service request protocol packet initiated by the target terminal to apply for the multicast communication service, where the service request protocol packet includes the service type information, the service content information, and the access network address of the target terminal, and the service content information includes a service number;
[0163] Sub-step S22: the master control server extracts the access network address of the source terminal from a preset content-address mapping table according to the service number;
[0164] Sub-step S23: the master control server obtains the multicast address corresponding to the source terminal and assigns it to the target terminal; and, according to the service type information and the access network addresses of the source terminal and the target terminal, obtains the communication link information of the current multicast service.
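Sub-steps S21 to S23 can be sketched as a single handler on the master control server: it reads the service number from the request packet, looks up the source terminal's access network address in the content-address mapping table, and assigns a multicast address to the target terminal. The table contents, address strings, and multicast addresses are illustrative assumptions.

    content_address_table = {"service-7": "an-addr-source"}    # service number -> source address
    multicast_pool = iter(["239.0.0.1", "239.0.0.2"])           # hypothetical multicast addresses

    def handle_multicast_request(protocol_packet):
        service_number = protocol_packet["service_content"]["service_number"]   # S21
        source_addr = content_address_table[service_number]                     # S22
        multicast_addr = next(multicast_pool)                                    # S23
        return {
            "source": source_addr,
            "target": protocol_packet["target_access_addr"],
            "multicast_address": multicast_addr,
            "service_type": protocol_packet["service_type"],
        }

    request = {"service_type": "multicast", "target_access_addr": "an-addr-target",
               "service_content": {"service_number": "service-7"}}
    print(handle_multicast_request(request))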
[0165] It should be noted that, for simplicity of description, the method embodiments are all expressed as a series of combined actions, but those skilled in the art should understand that the embodiments of this application are not limited by the described order of actions, because according to the embodiments of the present application, certain steps may be performed in another order or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of this application.
[0166] The call device provided in another embodiment of the present application can be applied to a video network that includes a GIS application platform, a first terminal, and a second terminal. The GIS application platform is the video networking GIS Tianyan dispatching application platform; terminal equipment can be located through the GIS application platform, and terminal equipment numbers can be read and queried as needed.
[0167] Referring to Figure 10, a structural block diagram of a communication device provided by an embodiment of the present application is shown. The communication device of the embodiment of the present application may include the following modules:
[0168] The alarm module 1001 is configured to send an alarm instruction to the GIS application platform when an emergency event is detected by the first terminal;
[0169] The request module 1002 is used for the GIS application platform to send a videophone request to the second terminal according to the alarm instruction;
[0170] The call module 1003 is used for the second terminal to establish a call connection with the first terminal according to the videophone request.
[0171] When it is detected that an emergency event has occurred or is occurring, the alarm module 1001 provided in the first terminal can send an alarm instruction to the GIS application platform based on the video network. The video networking software is installed on the first terminal (for example, a mobile phone), and the user can connect to the video network through the video networking software to obtain video services such as video calls, video conferences, and live video broadcasts. The video networking software also has an alarm function, and users can send alarm instructions to the GIS application platform by clicking the alarm button in the video networking software.
[0172] Specifically, the alarm module 1001 may send the alarm instruction to the message repeater based on the video network; after conversion by the message repeater, it is sent to the GIS application platform based on the video network. The alarm instruction contains at least the point information of the first terminal.
[0173] The request module 1002 can be provided in the GIS application platform. The request module 1002 can obtain the point information of the first terminal according to the alarm instruction and send the videophone request to the second terminal according to that point information. The second terminal may be, for example, a terminal device bound to the GIS application platform. The videophone request contains at least the number information of the first terminal.
[0174] The call module 1003 can be provided in the second terminal. After receiving the videophone request, the call module 1003 can extract the number information of the first terminal from the videophone request and request the video service from the first terminal to establish a call. The first terminal can then notify the on-site personnel to evacuate by broadcast.
[0175] In practical applications, the video network may also include a core server, and the first terminal and the second terminal may establish a video or audio call through the core server.
[0176] In an optional implementation manner, the video network further includes a message repeater, and the alarm module 1001 may specifically include:
[0177] a sending unit, used for the first terminal to send the alarm instruction to the message repeater;
[0178] a conversion unit, used for the message repeater to convert the alarm instruction into an alarm instruction conforming to the video networking standard protocol and send it to the GIS application platform.
[0179] In an optional implementation manner, the request module 1002 may specifically include:
[0180] A point determining unit, configured to determine point information of the first terminal according to the alarm instruction;
[0181] The request generating unit is configured to generate a videophone request according to the point information;
[0182] The request sending unit is configured to send the videophone request to the second terminal.
[0183] In an optional implementation manner, the call module 1003 may specifically include:
[0184] A number obtaining unit, configured for the second terminal to obtain the number information of the first terminal according to the videophone request;
[0185] The call establishment unit is configured to establish a call connection between the second terminal and the first terminal according to the number information.
[0186] Specifically, the call establishment unit is further used for:
[0187] Establishing a downlink communication link between the first terminal and the second terminal according to the number information;
[0188] Based on the downlink communication link, the second terminal establishes a call connection with the first terminal.
[0189] The call device provided by the embodiments of the application can establish a call connection between the first terminal that sends the alarm instruction and the second terminal, so that the first terminal and the second terminal can conduct real-time video or audio communication. In the event of an emergency such as a fire or flood, the commander can direct the on-site personnel according to the actual situation at the scene, and personnel at the scene can also be urgently evacuated by broadcast, thereby reducing losses.
[0190] As for the device embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and for related parts, please refer to the part of the description of the method embodiment.
[0191] The various embodiments in this specification are described in a progressive manner. Each embodiment focuses on the differences from other embodiments, and the same or similar parts between the various embodiments can be referred to each other.
[0192] Those skilled in the art should understand that the embodiments of the embodiments of the present application may be provided as methods, devices, or computer program products. Therefore, the embodiments of the present application may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, the embodiments of the present application may adopt the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.
[0193] The embodiments of this application are described with reference to the flowcharts and/or block diagrams of the methods, terminal devices (systems), and computer program products according to the embodiments of this application. It should be understood that each process and/or block in the flowcharts and/or block diagrams, and combinations of processes and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing terminal equipment to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing terminal equipment produce an apparatus for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
[0194] These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
[0195] These computer program instructions can also be loaded onto a computer or other programmable data processing terminal equipment, so that a series of operating steps are performed on the computer or other programmable terminal equipment to produce computer-implemented processing, so that the instructions executed on the computer or other programmable terminal equipment provide steps for implementing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
[0196] Although the preferred embodiments of the embodiments of the present application have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn the basic creative concept. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present application.
[0197] Finally, it should be noted that, in this document, relational terms such as first and second are only used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or terminal device. Without further restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or terminal device that includes the element.
[0198] The above is a detailed introduction to a call method and a call device provided by the present application. Specific examples are used herein to illustrate the principles and implementation of the present application; the description of the above embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and scope of application according to the idea of this application. In summary, the content of this specification should not be understood as a limitation on this application.