52nd week of 2019 patent application highlights part 42 |
Patent application number | Title | Published |
20190391763 | WRITE LEVELING A MEMORY DEVICE - A host device and memory device function together to perform internal write leveling of a data strobe with a write command within the memory device. The memory device includes a command interface configured to receive write commands from the host device. The memory device also includes an input-output interface configured to receive the data strobe from the host device. The memory device also includes internal write circuitry configured to launch an internal write signal based at least in part on the write commands. The launch of the internal write signal is based at least in part on an indication from the host device that indicates when to launch the internal write signal relative to a CAS write latency (CWL) for the memory device. | 2019-12-26 |
20190391764 | DYNAMIC MEMORY TRAFFIC OPTIMIZATION IN MULTI-CLIENT SYSTEMS - Systems, apparatuses, and methods for dynamically optimizing memory traffic in multi-client systems are disclosed. A system includes a plurality of client devices, a memory subsystem, and a communication fabric coupled to the client devices and the memory subsystem. The system includes a first client which generates memory access requests targeting the memory subsystem. Prior to sending a given memory access request to the fabric, the first client analyzes metadata associated with data targeted by the given memory access request. If the metadata indicates the targeted data is the same as or is able to be derived from previously retrieved data, the first client prevents the request from being sent out on the fabric on the data path to the memory subsystem. This helps to reduce memory bandwidth consumption and allows the fabric and the memory subsystem to stay in a low-power state for longer periods of time. | 2019-12-26 |
20190391765 | PRINTING APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM - The present disclosure enables a paper-out error to be displayed immediately in the middle of printing images of scanner-read originals. A control method for a printing apparatus that includes a scanner for reading originals, a printer for printing images of the originals read by the scanner, and a controller for selecting a printing paper storage unit storing printing paper on which the images are to be printed, the control method including counting a number of the originals, calculating, every time the number of the originals is counted, a number of printing paper sheets for printing the images of the read originals based on the counted number of the originals, and displaying an error message when the calculated number exceeds the number of printing paper sheets stored in the selected printing paper storage unit. | 2019-12-26 |
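The counting logic described in the abstract above can be sketched in a few lines — a minimal, hypothetical model in Python (the function name, parameters, and one-copy-per-original simplification are illustrative, not taken from the application):

```python
def check_paper(num_originals: int, copies_per_original: int, sheets_in_tray: int) -> bool:
    """Recompute the required sheet count each time an original is counted,
    and report a paper-out error as soon as demand exceeds supply."""
    for scanned in range(1, num_originals + 1):
        # Recalculated every time the original count increments.
        sheets_needed = scanned * copies_per_original
        if sheets_needed > sheets_in_tray:
            print(f"Paper-out error after original {scanned}: "
                  f"need {sheets_needed}, tray holds {sheets_in_tray}")
            return False
    return True
```

Because the check runs per counted original, the error can surface mid-scan rather than after the whole job is calculated.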
20190391766 | INFORMATION PROCESSING APPARATUS, METHOD OF CONTROLLING THE SAME, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - There is provided an information processing apparatus comprising a memory for storing one or more programs and at least one processor that executes the programs. The information processing apparatus functions to set a page count to be printed to one sheet and determines whether an object has moved in a first direction or a second direction that is a reverse direction to the first direction. If the object has moved in the first direction, the apparatus resets the page count to increase the set page count, and if the object has moved in the second direction, it resets the set page count to decrease the set page count. The information processing apparatus determines whether a conflict relationship is occurring between the reset page count and another print setting, and changes the other print setting if the conflict relationship is occurring. | 2019-12-26 |
20190391767 | IMAGE FORMING APPARATUS - An image forming apparatus includes a controller and a reservation job managing unit. The controller is configured to perform a print job or a transmission job using a printing device or a communication device. The reservation job managing unit is configured to (a) register schedule data and job data of a reservation job that is a print job or a transmission job in a predetermined storage device, (b) determine whether the job data is stored in the storage device or not when a reservation time has come on the basis of the schedule data, and (c) notify a user that the job data is not stored in the storage device if the job data is not stored in the storage device, and afterward cause the controller to perform the reservation job if the job data is restored in the storage device. | 2019-12-26 |
20190391768 | OUTPUT BINS WITH ADJUSTABLE OFFSET POSITIONS - An example of an apparatus to adjust an offset is provided. The apparatus includes a printing device to generate a plurality of print jobs. The apparatus also includes an output bin to catch the plurality of print jobs. The apparatus includes a motor to move the output bin, wherein the output bin alternates between a first position and a second position between each print job of the plurality of print jobs. The apparatus also includes a controller to control the motor, wherein the controller is to update position data based on sensor data to provide uninterrupted operation of the printing device after a sensor detects an obstacle. | 2019-12-26 |
20190391769 | REMOTE MANAGEMENT SYSTEM AND INFORMATION PROCESSING METHOD - A remote maintenance server includes a processor that operates as a remote panel function start instruction receiving unit that receives an instruction to start a remote panel function from a user via the user terminal and the user operation server, a relay server determining unit that determines the relay server to be used when executing the remote panel function, a verification information writing unit that writes verification information in the cache server, the verification information being used when the relay server relays connection, and a connection command sending unit that sends a connection command to the image forming apparatus that executes the remote panel function via the connection server, the connection command instructing to connect to the relay server to be used. | 2019-12-26 |
20190391770 | IMAGE PROCESSING APPARATUS, METHOD, AND COMPUTER-READABLE MEDIUM FOR REDUCING TIME REQUIRED UNTIL COMPLETING OUTPUT PROCESS AFTER SUCCESSFUL AUTHENTICATION - An image processing apparatus includes a print engine, a communication interface, a memory, and a controller configured to receive a print job via the communication interface, acquire authentication information associated with the print job, perform authentication based on the acquired authentication information, and determine whether the authentication is successful, when determining that the authentication is successful, cause the print engine to print, on a sheet, an image based on the print job, and regardless of whether the authentication is successful, transmit predetermined image data based on the print job via the communication interface. | 2019-12-26 |
20190391771 | INFORMATION PROCESSING APPARATUS, CONTROL METHOD FOR CONTROLLING THE INFORMATION PROCESSING APPARATUS IN A MAINTENANCE MODE, AND STORAGE MEDIUM - In an information processing apparatus and a method of controlling the same, a setting for prohibiting access to a removable medium is configured, and even if the setting is set, access to the removable medium is permitted in a case where the information processing apparatus is activated in the maintenance mode. | 2019-12-26 |
20190391772 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM - An information processing apparatus includes an acquiring unit that acquires operation information indicating an operation of a user, a first specifying unit that specifies an application corresponding to the operation information among applications not used by the user, a second specifying unit that specifies a terminal through which the application specified by the first specifying unit is capable of being used, and a transmitting unit that transmits application information indicating the application and terminal information indicating the terminal, to a terminal operated by the user. | 2019-12-26 |
20190391773 | INFORMATION PROCESSING APPARATUS, IMAGE FORMING APPARATUS, DISPLAY CONTROL METHOD, AND DISPLAY CONTROL PROGRAM - An image forming apparatus | 2019-12-26 |
20190391774 | SERVER APPARATUS, INFORMATION PROCESSING SYSTEM, AND IMAGE FORMING APPARATUS - [Object] To further reduce the time required for an image forming apparatus to obtain data from a server apparatus and form an image. | 2019-12-26 |
20190391775 | PRINT-JOB GROUPING APPARATUS, PRINT-JOB PROCESSING SYSTEM, AND NON-TRANSITORY COMPUTER READABLE MEDIUM - A print-job grouping apparatus includes a grouping unit that generates multiple group jobs by performing grouping based on multiple input print jobs in accordance with a grouping condition including a quality condition. Each group job is processed by a printer selected from multiple printers. The quality condition is a condition other than a general condition including a printing condition and is a special condition designated by a client requesting processing of each print job. | 2019-12-26 |
20190391776 | SYSTEMS AND METHODS FOR RE-ORDERING QUEUED PRINT JOBS - The present disclosure discloses systems and methods for re-ordering queued print jobs. The method includes executing one or more already received print jobs listed in a print queue of a printer. Then, a new print job is received in the print queue of the printer. A message is displayed to the user with an option to re-order the new print job, via a user interface. Based on the user input, the new print job is re-ordered by moving the new print job to the top of the print queue of the printer. Finally, the new print job is printed first followed by printing the one or more already received print jobs. | 2019-12-26 |
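The re-ordering flow in the abstract above reduces to a small queue operation. Here is a minimal sketch in Python — the function name and the deque-based model are illustrative assumptions, not the application's implementation:

```python
from collections import deque

def reorder_queue(queue: deque, new_job: str, move_to_top: bool) -> deque:
    """Append a newly received job; if the user opts to re-order it,
    move it ahead of the already-received jobs in the queue."""
    queue.append(new_job)          # new job arrives at the back by default
    if move_to_top:                # user chose "re-order" in the UI prompt
        queue.remove(new_job)
        queue.appendleft(new_job)  # now printed before earlier jobs
    return queue
```

With `move_to_top=False` the new job simply waits its turn at the back of the queue.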
20190391777 | Method and Apparatus for Implementing Content Displaying of Component - A method for content displaying of a component includes displaying, on a terminal screen, a first display interface of a component; acquiring a first display instruction; acquiring a second display interface of the component according to the first display instruction; and displaying, on the terminal screen, the second display interface of the component, where the second display interface includes the first display interface. | 2019-12-26 |
20190391778 | APPARATUS, SYSTEM, AND METHOD FOR CONTROLLING DISPLAY, AND RECORDING MEDIUM - An apparatus, system, and method for controlling a display, each of which: controls the display to display an image of a predetermined area of a first image, the first image being superimposed with a second image; determines whether the second image is viewed by a user; and switches a display of the second image between a first display in which the second image is displayed as a still image and a second display in which the second image is displayed as a moving image, based on a determination result indicating whether the second image is viewed by the user. | 2019-12-26 |
20190391779 | CROSS DEVICE DISPLAY SYNCHRONIZATION - Systems and methods for cross device display synchronization using state data. A second identifier, uniquely identifying a second device having a second display, is obtained at a first device with a first display and a first identifier uniquely identifying the first device. An association is created between the first identifier and the second identifier at a real-time database. User interface (UI) state data defining a first UI state of a UI presented on the first display is submitted from the first device to the real-time database via a first network thereby creating replica UI state data on the real-time database. The real-time database pushes the UI state data to the second device via a second network based on the association between the first identifier and the second identifier thereby causing the second device to synchronize a corresponding UI presented on the second display. | 2019-12-26 |
20190391780 | DISPLAY METHOD AND DEVICE - A display method and display apparatus. The method includes: receiving a display request sent by a vehicle-mounted terminal, the display request including an application identifier of a target application installed on a mobile terminal; in response to the received display request, determining whether the target application is in a running state, and arranging a storage space for the target application; in response to the target application being in a running state, starting to draw an interface of the running target application, and storing, in the storage space, interface data obtained by drawing; performing video encoding on the stored interface data to generate a video stream; and sending the generated video stream to the vehicle-mounted terminal so that the vehicle-mounted terminal displays the interface of the target application according to the video stream. | 2019-12-26 |
20190391781 | NFC-ENABLED APPARATUS AND METHOD OF OPERATION THEREOF - An NFC-enabled apparatus is disclosed. The apparatus includes a touch screen display and a near field communication (NFC) module comprising an NFC antenna and an NFC controller. In response to tagging between the NFC-enabled apparatus and the external NFC terminal, an NFC communication channel is established between the NFC-enabled apparatus and the external NFC terminal for data communication therebetween. | 2019-12-26 |
20190391782 | METHOD AND SYSTEM OF PROCESSING AN AUDIO RECORDING FOR FACILITATING PRODUCTION OF COMPETITIVELY LOUD MASTERED AUDIO RECORDING - Disclosed is a method of processing an audio recording for facilitating production of a competitively loud mastered audio recording with reduced distortion. The method includes receiving an audio file comprising the audio. Further, the method includes providing a first attenuation to the audio recording to produce a first attenuated audio recording and routing the first attenuated audio recording onto an input of a first bus. Further, the method includes providing a second attenuation to the first attenuated audio recording to generate a second attenuated audio recording and routing the second attenuated audio recording onto an input of a second bus. Further, the method includes providing a third attenuation to the second attenuated audio recording to generate a third attenuated audio recording. Yet further, the method includes processing the third attenuated audio recording to generate a track output. Moreover, the method includes transmitting the track output to the electronic device. | 2019-12-26 |
20190391783 | Sound Adaptation Based on Content and Context - An electronic device that dynamically adapts sound based at least in part on content and context is described. The electronic device may acquire information about an environment, which may include the second electronic device. Based at least in part on the information, the electronic device may determine a context associated with the environment (such as a number of individuals in the environment, a type of lighting in the environment, a time or a timestamp, a location, etc.). Then, based at least in part on the determined context and a characteristic of audio content (such as a type of music), the electronic device may calculate an acoustic radiation pattern. Next, the electronic device may provide the audio content and second information specifying the acoustic radiation pattern for the second electronic device. | 2019-12-26 |
20190391784 | CALL VISUALIZATION - Merchant/consumer calls may be recorded and evaluated according to a variety of criteria. The call recordings and analyses thereof, as well as consumer tracking information, may be displayed in a user interface of a web-based online portal for convenience in evaluating the use and efficacy of marketing channels as well as the quality of merchant/consumer interactions. In an aspect, the user interface provides a representation of a variety of telephone calls as an interactive keyword cloud that presents business-value-specific keywords targeted for detection during such telephone calls. The keyword cloud may depict keywords in a range of colors, sizes, and relative positioning to connote varied degrees of significance, such as a relative rate of occurrence of keywords in the represented telephone calls. Each keyword in the keyword cloud may contain a hyperlink to related content such as a listing of telephone calls containing the keyword. | 2019-12-26 |
20190391785 | SYSTEMS AND METHODS FOR GENERATING A GRAPHICAL REPRESENTATION OF AUDIO SIGNAL DATA DURING TIME COMPRESSION OR EXPANSION - Systems and methods for generating a graphical representation of audio signal data during time compression or expansion are provided. The system may include a processor that performs a method including displaying a waveform during audio-signal playback at a first speed by scrolling the waveform from a right portion of a display to a left portion of the display. The method includes receiving a command to increase or decrease the audio-signal playback speed and horizontally expanding or horizontally contracting the waveform in response to receiving the command to increase or decrease the audio-signal playback speed. | 2019-12-26 |
20190391786 | SYSTEMS AND METHODS TO OPTIMIZE MUSIC PLAY IN A SCROLLING NEWS FEED - Systems, methods, and non-transitory computer readable media are configured to receive metadata for audio content associated with an audio content item for presentation in a news feed to be displayed on a screen of a computing device associated with a user. The metadata is transformed for display in the audio content item. The transformed metadata is displayed in the audio content item. In addition, systems, methods, and non-transitory computer readable media are configured to present an audio content item in a news feed to be displayed on a screen of a computing device associated with a user. An input by the user for scrolling the news feed and the audio content item on the screen is received. A pop out player is presented in response to disappearance of the audio content item from the screen based on the scrolling. | 2019-12-26 |
20190391787 | Media Sharing Community - The present invention enables a user to share his/her listening experience selectively with others without sharing headphones and without disturbing others who do not want to listen. In a preferred embodiment, a first listener can accomplish this by storing in a Portable Electronic Device or similar device a library of listening experiences, listening to one of the listening experiences, and while listening to that one listening experience streaming the one listening experience to at least one other Portable Electronic Device or similar device. A second listener at the other Portable Electronic Device can then listen to the same listening experience as the first listener at the same time. It is expected that the listening experiences will typically be songs or other music but the invention may be practiced with any type of audio content. The first listener may also create a playlist of the listening experiences in the library and make the playlist available to others. Others may use the playlist to access the library and listen to one or more listening experiences stored in the library. Also, utilizing the same interface and communication methodologies as described above, the technology platform detailed in this application can be used for commercial purposes to stream location-based content, audio and otherwise, to a connected network of Portable Electronic Devices. Commercial uses of this functionality include providing commercial establishments with the ability to create synchronous (users come into a stream at the exact point that it is being streamed in real time) and/or asynchronous (users can select and start a transmission from the beginning) featured channels (location based) where they can stream any self-created or otherwise authorized content to other Portable Electronic Devices in their range. | 2019-12-26 |
20190391788 | SYSTEMS AND METHODS FOR SWITCHING OPERATIONAL MODES BASED ON AUDIO TRIGGERS - Systems and methods are provided for enabling different modes of operation based on a detected audio trigger. The systems and methods may generate an audio signature for a detected first sound and compare the audio signature with a plurality of registered audio signatures. In response to determining that the audio signature matches a first registered audio signature, the systems and methods may enable a first operational mode for a device that enables a first plurality of commands. In response to determining that the audio signature matches a second registered audio signature, the systems and methods may enable a second operational mode for a device that enables a second plurality of commands, where the second plurality of commands are different from the first plurality of commands. | 2019-12-26 |
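The signature-matching step in the abstract above is essentially a lookup from a registered signature to a command set. A minimal sketch in Python follows — the signature strings and command names are hypothetical placeholders, and real signature matching would involve fuzzy acoustic comparison rather than exact string keys:

```python
# Registered audio signatures mapped to the operational mode (command set)
# each one enables. Keys stand in for acoustic fingerprints.
REGISTERED_MODES = {
    "sig_doorbell": {"announce_visitor", "show_camera"},
    "sig_alarm": {"call_contact", "flash_lights", "unlock_door"},
}

def enable_mode(detected_signature: str) -> set:
    """Return the command set for the matching registered signature,
    or an empty set when no registered signature matches."""
    return REGISTERED_MODES.get(detected_signature, set())
```

Two different registered signatures thus enable two different pluralities of commands, as the abstract describes.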
20190391789 | MULTIPLICATION OPERATIONS IN MEMORY - Examples of the present disclosure provide apparatuses and methods for performing multi-variable bit-length multiplication operations in a memory. An example method comprises performing a multiplication operation on a first vector and a second vector. The first vector includes a number of first elements stored in a group of memory cells coupled to a first access line and a number of sense lines of a memory array. The second vector includes a number of second elements stored in a group of memory cells coupled to a second access line and the number of sense lines of the memory array. The example multiplication operation can include performing a number of AND operations, OR operations and SHIFT operations without transferring data via an input/output (I/O) line. | 2019-12-26 |
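The AND/OR/SHIFT decomposition in the abstract above builds on the classic shift-and-add multiplication scheme. The sketch below illustrates that scheme in Python — the bit test is an AND and the partial-product alignment is a SHIFT; for brevity the accumulation uses Python's `+` rather than an in-memory logic-gate adder, so this is an illustration of the idea, not the patented in-array method:

```python
def multiply_shift_and(a: int, b: int) -> int:
    """Multiply two non-negative integers by testing multiplier bits (AND)
    and accumulating shifted partial products (SHIFT)."""
    result = 0
    shift = 0
    while b >> shift:                 # bits of the multiplier remain
        if (b >> shift) & 1:          # AND: test one bit of the multiplier
            result += a << shift      # SHIFT: align the partial product
        shift += 1
    return result
```

In the patent's setting, each of these steps operates on whole vector elements in parallel across sense lines, without moving data over the I/O line.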
20190391790 | PRE-COMPILER, SYSTEM FOR DEVELOPING A PROGRAM, PRE-COMPILATION METHOD AND CORRESPONDING COMPUTER PROGRAM - A pre-compiler is arranged to analyze a source code including main code written in a main computer language and at least one block of code written in another computer language embedded in the main code, and replace each block of embedded code by replacement code in the main computer language. The pre-compiler is further arranged to search, in the block(s) of embedded code, for an expression including a main variable intended to contain a string of characters, an indicator variable intended to contain an integer, and one or more predefined characters for association of the main and indicator variables, and to replace each expression found by code in the main computer language providing the N first characters of the first variable, N being the value of the indicator variable. | 2019-12-26 |
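The expression-replacement step described in the abstract above can be sketched with a small text transform. The `#` association character, the `MAIN#IND` syntax, and the use of a Python slice as the "main language" replacement code are all illustrative assumptions — the patent does not specify these:

```python
import re

# Hypothetical embedded syntax "MAIN#IND": '#' is the predefined character
# associating the main (string) variable with the indicator (integer) variable.
EXPR = re.compile(r"\b([A-Za-z_]\w*)#([A-Za-z_]\w*)\b")

def precompile(embedded_code: str) -> str:
    """Replace each MAIN#IND expression with main-language code yielding
    the first IND characters of MAIN (here, a slice, for illustration)."""
    return EXPR.sub(r"\1[:\2]", embedded_code)
```

A call such as `precompile("send(buffer#count)")` would emit `send(buffer[:count])`, i.e. code providing the first N characters of the main variable, N being the indicator's value.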
20190391791 | ACCELERATION TECHNIQUES FOR GRAPH ANALYSIS PROGRAMS - Source code of a graph analysis program expressed in a platform-independent language which supports linear algebra primitives is obtained. An executable version of the program is generated, which includes an invocation of a function of a parallel programming library optimized for a particular hardware platform. A result of executing the program is stored. | 2019-12-26 |
20190391792 | CODE REUSABILITY - Disclosed is a system for facilitating reusability of a code snippet during development of a software application. Initially, a plurality of tokens is extracted, by using an Artificial Intelligence (AI) based syntactic analysis, from a sequence of lines of code entered by a developer. Further, each token of the plurality of tokens is converted into a vector by using a neural word embedding technique. Subsequently, a context of the plurality of tokens is determined by using a deep autoencoder neural network technique. Furthermore, at least one code snippet is recommended from a plurality of code snippets corresponding to the context. To do so, the context is compared with a plurality of contexts by using a Deep Recurrent Neural Network (Deep RNN) technique. Upon comparison, a confidence score is computed for each code snippet. Finally, the at least one code snippet is selected based on the confidence score. | 2019-12-26 |
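The final ranking step in the abstract above — scoring candidate snippets against the developer's context and selecting by confidence — can be sketched with cosine similarity standing in for the deep-network score. The vectors, snippet names, and scoring function are toy assumptions, not the patented Deep RNN pipeline:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def recommend(context_vec, snippets):
    """snippets: mapping of snippet text -> stored context vector.
    Return the snippet whose context scores highest against the
    developer's current context (the 'confidence score' stand-in)."""
    return max(snippets, key=lambda s: cosine(context_vec, snippets[s]))
```

Each stored snippet's context is compared against the current one, and the highest-confidence snippet is the one recommended.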
20190391793 | SEPARATION OF USER INTERFACE LOGIC FROM USER INTERFACE PRESENTATION BY USING A PROTOCOL - A single presentation logic that is independent of a user interface framework is provided. Also provided is a protocol to interface the single presentation logic to the user interface framework. A plurality of user interfaces is configured to be plugged to the single presentation logic. | 2019-12-26 |
20190391794 | SYSTEM FOR DEVELOPING A PROGRAM INTENDED TO COMMUNICATE WITH A SET OF AT LEAST ONE DATABASE - A system includes an interface; a software library; a pre-compiler adapted to replace each embedded SQL instruction with replacement code in COBOL language including a call to a routine of the software library passing to the routine an instruction issuing from the embedded SQL instruction. The routine is adapted to send to the interface a message including the instruction issuing from the embedded SQL instruction. The interface is a server intended to be connected to a computer network to which the computer device is connected, and the routine is a client of that server; the interface is designed to relay, as a SQL instruction, the instruction of at least one of the received message(s) to at least one database in the set. | 2019-12-26 |
20190391795 | INFORMATION PROCESSING APPARATUS, COMPUTER-READABLE RECORDING MEDIUM STORING THEREIN COMPILER PROGRAM, AND COMPILING METHOD - An information processing apparatus includes a memory; and a processor coupled to the memory and configured to, when source code includes an instruction for storing units of data in an area of an N-dimensional variable-length array (N being an integer equal to or greater than 2), generate object code in the memory to cause the units of data to be stored in an area of an N-dimensional fixed-length array instead of the area of the N-dimensional variable-length array, and when the source code includes an instruction for successively accessing the units of data stored in the area of the N-dimensional variable-length array, generate the object code in the memory to cause the units of data stored in the area of the N-dimensional fixed-length array to be stored contiguously in an area of a one-dimensional fixed-length array. | 2019-12-26 |
20190391796 | CONTROL OF SCHEDULING DEPENDENCIES BY A NEURAL NETWORK COMPILER - A compiler receives a graph describing a neural network and accesses data to describe a target computing device to implement the neural network. The compiler generates an intermediate representation from the graph and the data, and determines dependencies between operations identified in the intermediate representation. A set of barrier tasks are determined to be performed to control flow of the set of operations based on the dependencies, where the set of barrier tasks are to be performed using hardware barrier components on the target computing device. Indications of the barrier tasks are inserted into the intermediate representation. The compiler generates a binary executable from the intermediate representation to enable performance of the barrier tasks to control performance of the set of operations at the target computing device. | 2019-12-26 |
20190391797 | SYSTEMS AND/OR METHODS FOR TYPE INFERENCE FROM MACHINE CODE - Systems, methods and computer readable medium described herein relate to techniques for automatic type inference from machine code. An example technique includes receiving a machine code of a program, generating an intermediate representation of the machine code, generating a plurality of type constraints from the intermediate representation, generating one or more inferred types based at least upon the plurality of type constraints, converting the generated inferred types to C types, updating the intermediate representation by applying the inferred types to the intermediate representation, and outputting said inferred types, said converted C types, and/or at least a portion of the updated intermediate representation. | 2019-12-26 |
20190391798 | REDUCING OVERHEAD OF SOFTWARE DEPLOYMENT BASED ON EXISTING DEPLOYMENT OCCURRENCES - Methods and systems for deploying software applications based on previous deployments. One method includes collecting first telemetry data tracking usage of a first plurality of features of a first software application by a first plurality of devices and creating a first plurality of mappings based on the first telemetry data. The method further includes, as part of deploying the first software application within an organization, collecting second telemetry data tracking usage of a second plurality of features of a second software application by a second plurality of devices of the organization, creating a second plurality of mappings based on the second telemetry data, determining a set of features to be included in a testing plan relating to the first software application based on the first plurality of mappings and the second plurality of mappings, and implementing the testing plan as part of deploying the first software application within the organization. | 2019-12-26 |
20190391799 | Apparatus and Method to Execute Prerequisite Code Before Delivering UEFI Firmware Capsule - A method includes creating, by system firmware at an information handling system, a virtual Advanced Configuration and Power Interface (ACPI) bus device. A management service event is registered by a bus device driver corresponding to the virtual ACPI bus device. The management service event, when executed, determines whether a target device is in a condition to receive revised firmware. | 2019-12-26 |
20190391800 | OVER-THE-AIR (OTA) MOBILITY SERVICES PLATFORM - An over-the-air (OTA) mobility service platform (MSP) is disclosed that provides a variety of OTA services, including but not limited to: updating software OTA (SOTA), updating firmware OTA (FOTA), client connectivity, remote control and operation monitoring. In some exemplary embodiments, the MSP is a distributed computing platform that delivers and/or updates one or more of configuration data, rules, scripts and other services to vehicles and IoT devices. In some exemplary embodiments, the MSP optionally provides data ingestion, storage and management, data analytics, real-time data processing, remote control of data retrieving, insurance fraud verification, predictive maintenance and social media support. | 2019-12-26 |
20190391801 | CONSTRUCTION MACHINE - A program rewriting device for a construction machine, which rewrites a program of an on-board controller, is provided with a mode determination section that determines whether the operation mode of the construction machine is a maintenance mode in which the program can be rewritten and the operation of an actuator is prevented, a preparation mode in which the rewriting of the program and the operation of the actuator are prevented, or a work mode in which the rewriting of the program is prevented and the operation of the actuator is permitted, an urgency degree determination section that determines the degree of urgency for rewriting of an update program, and a rewriting execution section that, at the time of rewriting the program, when it has been determined that the operation mode is the preparation mode and the degree of urgency is high, rewrites the program by switching the operation mode to the maintenance mode. | 2019-12-26 |
20190391802 | OVER-THE-AIR (OTA) UPDATE FOR FIRMWARE OF A VEHICLE COMPONENT - Executable code is part of an over-the-air (OTA) update received by, for example, a computing device in a vehicle. In one example, the update is a secure over-the-air (SOTA) update of software that is stored in firmware of a vehicle component (e.g., firmware stored in memory of a storage device or a boot device that are mounted in a vehicle). | 2019-12-26 |
20190391803 | APPLICATION HOT DEPLOY METHOD TO GUARANTEE APPLICATION VERSION CONSISTENCY AND COMPUTER PROGRAM STORED IN COMPUTER READABLE MEDIUM THEREFOR - According to an exemplary embodiment of the present disclosure, disclosed is a method for seamless application version management in a system including a plurality of application servers. A computer program for processing the above-mentioned method stores procedures including: transmitting held application version information to an application management server; receiving an updated version of an application file and version information corresponding to the application file from the application management server; determining that it is possible to perform a service using the updated version of the application by loading the updated version of the application file; transmitting application update readiness information to the application management server when it is determined that it is possible to perform the service using the updated version of the application; and receiving a command to apply the updated version from the application management server. | 2019-12-26 |
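The readiness handshake described in the hot-deploy abstract above can be sketched as follows. This is a minimal illustration, not the claimed implementation: the `AppServer` class, its method names, and the version strings are all hypothetical, and the management server is reduced to plain function calls.

```python
# Hedged sketch of the hot-deploy handshake: each application server stages
# the updated version, reports readiness, and swaps versions only when the
# management server issues the apply command, keeping versions consistent.

class AppServer:
    def __init__(self, version):
        self.active = version   # version currently serving requests
        self.staged = None      # updated version loaded but not yet applied

    def receive_update(self, version):
        """Load the update alongside the active version; report readiness."""
        self.staged = version
        return "ready"  # the application update readiness information

    def apply_update(self):
        """On the management server's command, switch to the staged version."""
        if self.staged is not None:
            self.active, self.staged = self.staged, None
        return self.active

servers = [AppServer("1.0"), AppServer("1.0")]
ready = all(s.receive_update("2.0") == "ready" for s in servers)
if ready:  # only when every server is ready does the apply command go out
    versions = [s.apply_update() for s in servers]
```

Because the apply command is withheld until every server has staged the update, no request ever observes a mix of old and new versions.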
20190391804 | ODATA/CRUD ENABLED SOLUTION FRAMEWORK - A method includes defining a solution framework from a plurality of preconfigured components for developing a web service, defining a preliminary data model describing business objects associated with the web service, adding code that defines specific logic tasks associated with the web service, and deploying the web service utilizing the solution framework in a web server. | 2019-12-26 |
20190391805 | SOFTWARE CHANGE TRACKING AND MANAGEMENT - Systems and methods for software tracking and management are disclosed. In embodiments, a computer-implemented method comprises: receiving, by a computing device, build output code from one or more user computer devices via a network, wherein the build output code is generated in response to a software build; automatically identifying, by the computing device, differences between the build output code and associated in-production software code; automatically mapping, by the computing device, the differences to microservices of the in-production software code; and generating, by the computing device, a list of microservices of the in-production software code affected by the differences in a rollout of the build output code based on the mapping. | 2019-12-26 |
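The diff-and-map flow in the change-tracking abstract above can be sketched briefly. The file paths, the prefix-based ownership map, and the service names below are illustrative assumptions, not the patented mapping technique.

```python
# Hypothetical sketch of mapping build-output differences to microservices:
# diff the new build against in-production code, then map changed files onto
# the services that own them via an assumed path-prefix ownership table.

def diff_builds(production, build_output):
    """Return the set of files whose contents differ between builds."""
    changed = {path for path, code in build_output.items()
               if production.get(path) != code}
    # Files removed by the new build also count as differences.
    changed |= set(production) - set(build_output)
    return changed

def affected_microservices(changed_files, ownership):
    """Map changed files onto the microservices that own them."""
    return sorted({svc for path in changed_files
                   for prefix, svc in ownership.items()
                   if path.startswith(prefix)})

production = {"billing/api.py": "v1", "auth/login.py": "v1"}
build      = {"billing/api.py": "v2", "auth/login.py": "v1"}
ownership  = {"billing/": "billing-service", "auth/": "auth-service"}

changed = diff_builds(production, build)
services = affected_microservices(changed, ownership)
```

The resulting list is exactly the "list of microservices affected by a rollout" the abstract describes, derived without touching the services themselves.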
20190391806 | DETERMINATION APPARATUS, DETERMINATION METHOD, AND DETERMINATION PROGRAM - A determination apparatus includes: a feature information extraction unit configured to extract, as feature information, function definition information as information for defining a function and function calling order information in which function names to be executed in the function are written in execution order from each of an input source code and a byte code of a program; and a similarity calculation unit configured to calculate a similarity between a function in the source code and a function in the byte code by using the feature information extracted by the feature information extraction unit. | 2019-12-26 |
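The call-order comparison above lends itself to a short sketch. Assume, hypothetically, that feature extraction has already produced an ordered list of callee names for a source-code function and for a byte-code function; the similarity metric used here (`difflib.SequenceMatcher` ratio) is an illustrative stand-in, not the patent's metric.

```python
# Minimal sketch: score how alike two functions are by comparing the order
# of function names each one calls. SequenceMatcher.ratio() returns
# 2*M/T, where M is the number of matched elements and T the total length.
from difflib import SequenceMatcher

def call_order_similarity(src_calls, byte_calls):
    """Similarity in [0, 1] between two ordered lists of callee names."""
    return SequenceMatcher(None, src_calls, byte_calls).ratio()

src  = ["open", "read", "parse", "close"]   # calls seen in the source code
byte = ["open", "read", "close"]            # calls seen in the byte code
score = call_order_similarity(src, byte)    # 2*3/(4+3) = 6/7
```

Ordered matching is what distinguishes this from a bag-of-names comparison: two functions calling the same helpers in a different order score lower.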
20190391807 | COMPUTER-READABLE RECORDING MEDIUM STORING OPTIMIZATION PROBLEM COMPUTING PROGRAM AND OPTIMIZATION PROBLEM COMPUTING SYSTEM - A processing unit generates a first graph that has a plurality of vertices respectively corresponding to all variables included in an objective function and has edges each connecting two vertices to indicate an existence of interaction between corresponding variables, generates a second graph, which is an abstraction of the first graph, by repeatedly merging two vertices connected by an edge into one vertex in the first graph, classifies all variables into candidates for variable groups to be respectively used for partial problems and a candidate for a boundary variable group to be used for computing a complete solution to a combinatorial optimization problem, based on the connection relationship among a plurality of vertices included in the second graph and a partition count, and determines the variable groups and boundary variable group, based on these candidates by reference to the connection relationship among the vertices included in the first graph. | 2019-12-26 |
20190391808 | APPARATUS AND METHOD FOR RESYNCHRONIZATION PREDICTION WITH VARIABLE UPGRADE AND DOWNGRADE CAPABILITY - A method and apparatus generates control information that indicates whether to change a counter value associated with a particular load instruction. In response to the control information, the method and apparatus causes a hysteresis effect for operating between a speculative mode and a non-speculative mode based on the counter value. The hysteresis effect is in favor of the non-speculative mode. The method and apparatus causes the hysteresis effect by incrementing the counter value associated with the particular load instruction by a first value or decrementing the counter value by a second value. The first value is greater than the second value. | 2019-12-26 |
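The asymmetric counter in the resynchronization-prediction abstract above can be sketched in a few lines. The concrete increment, decrement, and threshold values here are assumptions chosen only to satisfy the stated constraint that the first value exceeds the second.

```python
# Hedged sketch of the hysteresis scheme: a per-load counter rises by a
# larger value on a resynchronization (pushing toward non-speculative mode)
# and falls by a smaller value on success, so recovery back to speculative
# mode is deliberately slow -- the hysteresis favors non-speculative mode.

INC, DEC, THRESHOLD = 4, 1, 8  # assumed values; first value > second value

def update_counter(counter, resync_occurred):
    """Raise the counter quickly on a resync, lower it slowly on success."""
    if resync_occurred:
        return counter + INC
    return max(0, counter - DEC)

def mode(counter):
    """Operate speculatively only while the counter stays below threshold."""
    return "non-speculative" if counter >= THRESHOLD else "speculative"

c = 0
c = update_counter(c, True)   # one resync: counter jumps by INC
c = update_counter(c, True)   # a second resync crosses the threshold
m = mode(c)                   # now non-speculative
c = update_counter(c, False)  # recovery is slow: one success only drops DEC
```

Two bad events flip the mode, but it takes several good events to flip it back; that asymmetry is the hysteresis effect.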
20190391809 | PROGRAMS WITH SERIALIZABLE STATE - Described herein are techniques for suspending execution of a process, including through serializing execution state of the process. Through serializing the execution state, the execution state can be converted to a byte string for output. In some embodiments, an executing process that has been suspended may be resumed, through deserializing the execution state. For example, a byte string that is a serialized execution state of a suspended process may be deserialized to generate one or more data objects for the execution state of the suspended process, and a process may be configured with the data objects resulting from the deserializing. By configuring the process with the data objects resulting from the deserializing, the process may take on the execution state of the suspended executing process and resume execution from the point of suspension. The process that may be suspended may be an instance of a request/response application. | 2019-12-26 |
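The suspend/resume cycle described above maps naturally onto byte-string serialization. The sketch below uses Python's `pickle` and a plain dict as a stand-in for execution state; capturing the state of a real process is, of course, far more involved than this assumption suggests.

```python
# Minimal illustration of suspending a process by serializing its execution
# state into a byte string, then resuming by deserializing that string into
# data objects a fresh process can be configured with.
import pickle

def suspend(state):
    """Serialize execution state into a byte string for storage or output."""
    return pickle.dumps(state)

def resume(byte_string):
    """Deserialize a byte string back into execution-state objects."""
    return pickle.loads(byte_string)

# Hypothetical execution state of a request/response application instance.
state = {"program_counter": 42, "locals": {"total": 7}, "phase": "response"}
blob = suspend(state)      # the suspended process as an opaque byte string
restored = resume(blob)    # a new process takes on the saved state
```

Because the suspended state is just bytes, it can be written to disk or shipped over a network, and execution can resume on a different machine from the point of suspension.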
20190391810 | LOW LATENCY EXECUTION OF FLOATING-POINT RECORD FORM INSTRUCTIONS - A computer processing system is provided. The computer processing system includes a processor configured to execute a record form instruction cracked into two internal instructions. A first one of the two internal instructions executes out-of-order to compute a target register and a second one of the two internal instructions executes in-order to compute a condition register (CR) to improve a processing speed of the record form instruction. | 2019-12-26 |
20190391811 | MULTI-VARIATE STRIDED READ OPERATIONS FOR ACCESSING MATRIX OPERANDS - In one embodiment, a matrix processor comprises a memory to store a matrix operand and a strided read sequence, wherein: the matrix operand is stored out of order in the memory; and the strided read sequence comprises a sequence of read operations to read the matrix operand in a correct order from the memory. The matrix processor further comprises circuitry to: receive a first instruction to be executed by the matrix processor, wherein the first instruction is to instruct the matrix processor to perform a first operation on the matrix operand; read the matrix operand from the memory based on the strided read sequence; and execute the first instruction by performing the first operation on the matrix operand. | 2019-12-26 |
20190391812 | CONDITIONAL EXECUTION SPECIFICATION OF INSTRUCTIONS USING CONDITIONAL EXTENSION SLOTS IN THE SAME EXECUTE PACKET IN A VLIW PROCESSOR - In one embodiment, a system includes a memory and a processor core. The processor core includes functional units and an instruction decode unit configured to determine whether an execute packet of instructions received by the processing core includes a first instruction that is designated for execution by a first functional unit of the functional units and a second instruction that is a condition code extension instruction that includes a plurality of sets of condition code bits, wherein each set of condition code bits corresponds to a different one of the functional units, and wherein the sets of condition code bits include a first set of condition code bits that corresponds to the first functional unit. When the execute packet includes the first and second instructions, the first functional unit is configured to execute the first instruction conditionally based upon the first set of condition code bits in the second instruction. | 2019-12-26 |
20190391813 | LOW LATENCY SYNCHRONIZATION FOR OPERATION CACHE AND INSTRUCTION CACHE FETCHING AND DECODING INSTRUCTIONS - The techniques described herein provide an instruction fetch and decode unit having an operation cache with low latency in switching between fetching decoded operations from the operation cache and fetching and decoding instructions using a decode unit. This low latency is accomplished through a synchronization mechanism that allows work to flow through both the operation cache path and the instruction cache path until that work is stopped due to needing to wait on output from the opposite path. The existence of decoupling buffers in the operation cache path and the instruction cache path allows work to be held until that work is cleared to proceed. Other improvements, such as a specially configured operation cache tag array that allows for detection of multiple hits in a single cycle, also improve latency by, for example, improving the speed at which entries are consumed from a prediction queue that stores predicted address blocks. | 2019-12-26 |
20190391814 | IMPLEMENTING FIRMWARE RUNTIME SERVICES IN A COMPUTER SYSTEM - An example method of implementing firmware runtime services in a computer system having a processor with a plurality of hierarchical privilege levels, the method including: calling, from software executing at a first privilege level of the processor, a runtime service stub in a firmware of the computer system; executing, by the runtime service stub, an upcall instruction from the first privilege level to a second privilege level of the processor that is more privileged than the first privilege level; and executing, by a handler, a runtime service at the second privilege level in response to execution of the upcall instruction. | 2019-12-26 |
20190391815 | INSTRUCTION AGE MATRIX AND LOGIC FOR QUEUES IN A PROCESSOR - An information handling system and method are disclosed for processing information, in an embodiment including at least one processor; at least one queue associated with the processor for holding instructions; and at least one age matrix associated with the queue for determining the relative age of the instructions held within the queue, including situations where, if multiple instructions enter the queue at the same time, age comparison calculations are first performed by comparing each simultaneous incoming instruction independently to instructions already in the queue, and then age calculations are performed between the simultaneous incoming instructions. In one aspect, if the incoming instruction is older than any in-thread instruction already in the queue, the older in-thread instruction is assigned in the age matrix the age of the next youngest in-thread instruction already in the queue. | 2019-12-26 |
20190391816 | BLADE SERVER - A blade server with an apparatus for configuring the blade server is disclosed. The blade server includes at least one data processor, and the data processor is configured to: determine the presence of a response file at a remote management module upon deployment of the blade server; in response to the response file being present, receive the response file from the remote management module; and retrieve an ISO image of a desired operating system for the blade server in accordance with data stored in the response file, in order to install the desired operating system for the blade server. | 2019-12-26 |
20190391817 | BOOT AUTHENTICATION - Examples associated with boot authentication are described. One example includes initiating a power on self-test (POST) phase of a boot of a system. Prior to initiating a driver execution environment phase of the POST phase, a network stack may be loaded for a network port. An encrypted key may be retrieved from a trusted component of the system. Boot of the system may be permitted to proceed upon establishing a connection with an authentication server, and authenticating the system to the authentication server based on the encrypted key. | 2019-12-26 |
20190391818 | RESOURCE-BASED BOOT SEQUENCE - A computer-implemented method, for booting a computer system, that provides a list with entries of startup processes. Each startup process defines a resource of the computer system. For each startup process a requirement is defined. The method further comprises fetching one of the entries of the list of startup processes; determining whether the requirement is satisfied for that entry; fetching, in case the requirement is not satisfied, a next one of the entries; starting, in case the requirement is satisfied, the startup process; and repeating the fetching of a next entry, the determining, and the starting until all startup processes of the list have been started. | 2019-12-26 |
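The fetch/check/start/defer loop of the resource-based boot sequence above can be sketched compactly. The entry names and their requirements below are illustrative assumptions; the core idea is only that each started process publishes a resource later entries may require.

```python
# Sketch of the boot loop: repeatedly walk the list of startup processes,
# starting each entry whose requirement is satisfied and deferring the rest,
# until every entry has been started (or no further progress is possible).

def boot(entries, satisfied):
    """Start every entry, deferring those whose requirement is not yet met.

    `entries` maps a startup-process name to its required resource;
    `satisfied` is the set of resources available when boot begins.
    Starting a process makes its own resource available to later entries.
    """
    started = []
    pending = list(entries)
    while pending:
        deferred = []
        for name in pending:
            if entries[name] in satisfied:
                started.append(name)
                satisfied.add(name)  # the started process is now a resource
            else:
                deferred.append(name)
        if len(deferred) == len(pending):
            raise RuntimeError(f"unsatisfiable requirements: {deferred}")
        pending = deferred
    return started

order = boot({"network": "kernel", "nfs-mount": "network", "app": "nfs-mount"},
             satisfied={"kernel"})
```

A dependency chain like this resolves in order even if the list were shuffled, because unmet entries are simply fetched again on the next pass.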
20190391819 | COMMUNICATION DEVICE, SERVER, COMMUNICATION SYSTEM, COMMUNICATION METHOD, AND PROGRAM - A communication device includes a first communicator, a second communicator, an identifier acquirer, an identifier transmitter, a communication program acquirer, and a communication program executor. The first communicator communicates with a device. The second communicator communicates with a server. The identifier acquirer acquires from the device via the first communicator an identifier for identifying the device. The identifier transmitter transmits to the server the identifier acquired by the identifier acquirer. The communication program acquirer acquires from the server via the second communicator the communication program associated with the identifier transmitted by the identifier transmitter. The communication program executor executes the communication program acquired by the communication program acquirer. | 2019-12-26 |
20190391820 | METHOD FOR SETTING DISPLAY PANEL DYNAMICALLY AND ELECTRONIC DEVICE - A method for setting a display panel dynamically and an electronic device are provided. In a booting stage of the electronic device, a display driver is executed, wherein a motherboard of the electronic device includes at least one specified pin, a storage device and a processor. A predetermined pin value is set in the at least one specified pin and read from the at least one specified pin of the motherboard through the display driver. A database is queried through the display driver and includes multiple reference pin values corresponding to multiple sets of parameter values. The set of parameter values corresponding to the predetermined pin value is obtained according to the reference pin values; and the display panel is initialized through the display driver using the set of parameter values corresponding to the predetermined pin value. | 2019-12-26 |
20190391821 | METHOD AND APPARATUS FOR PLUG AND PLAY, NETWORKABLE ISO 18000-7 CONNECTIVITY - A device may comprise a Universal Serial Bus (USB) interface and a wireless interface operable to communicate in accordance with the ISO 18000-7 standard. The device may be operable to receive a command via the USB interface and transmit the command via the wireless interface. The device may be operable to receive data via the wireless interface and transmit the data via the USB interface. A form factor of the USB device may be such that it can be plugged directly into a USB port without any external cabling between the USB device and said USB port. | 2019-12-26 |
20190391822 | Application Group Operation Method and Terminal - An application group operation method and a terminal are disclosed, where the method is applied to a terminal having a display screen, and the method includes receiving a first operation on a first folder in a user interface of the terminal, and obtaining at least one operation option of the first folder, where the at least one operation option is determined based on application configuration files of M APPs in the first folder, and M is an integer greater than 0, and when a first operation option is triggered, executing an operation command corresponding to the first operation option, where the first operation option is one of the at least one operation option. | 2019-12-26 |
20190391823 | APPLICATION DEPLOYMENT - A method of deploying an application is provided. The method includes publishing a first code package to a package registry and publishing one or more further code packages to the package registry. The first code package can include code specifying a first definition of a class, and at least one of the one or more further code packages comprises code specifying a further definition of the class. The further definition of the class comprises prototype merging, so that on compilation the first definition of the class and the further definition of the class are loaded as a single class, and module augmentation, so that the first and further definitions of the class are treated as a single merged class by development tools. | 2019-12-26 |
20190391824 | PARAMETER CONFIGURATION SYSTEM OF ELECTRONIC DEVICE - An operation parameter configuration method includes configuring at least two groups of operation parameters of an application, detecting a startup signal of the application in real time, confirming one of the at least two groups of operation parameters according to the startup signal, and starting the application in a foreground of the electronic device according to the one confirmed group of operation parameters. The at least two groups of operation parameters include a group of default operation parameters and a group of optimal operation parameters, the group of optimal operation parameters being calculated according to a history of execution of the application in the foreground of the electronic device. | 2019-12-26 |
20190391825 | USER INTERFACE FOR NAVIGATING MULTIPLE APPLICATIONS - In one general aspect, a method and system are described for identifying a plurality of functions associated with an application that is operable on a first software platform, identifying a plurality of user interface aspects of the application, identifying a plurality of navigational aspects of the application, generating a reformatted user interface capable of executing the plurality of functions on a second software platform. | 2019-12-26 |
20190391826 | NETWORK METHOD AND APPARATUS - A method of operating a computer network comprising a communications device ( | 2019-12-26 |
20190391827 | INTELLIGENT ASSISTANT FOR USER-INTERFACE - Artificial intelligence systems and methods providing enhanced prediction of information relevant to a conversation are disclosed. The method includes monitoring a conversation between a requestor and a provider. The method also includes determining metadata and text of the conversation. The method further includes determining a regional status of the requestor based on the metadata and text of the conversation, regional information, and regional classification rules. Additionally, the method includes determining a local status of the requestor based on the text of the conversation, the regional status, local information, and local classification rules. Moreover, the method includes determining suggestions based on the regional status, the local status, transactional status information, and transactional classification rules. Further, the method includes providing the suggestions to a user-interface device of the provider. | 2019-12-26 |
20190391828 | METHOD OF CONTROLLING DIALOGUE SYSTEM, DIALOGUE SYSTEM, AND STORAGE MEDIUM - A dialogue system includes an inquiry step of generating and outputting inquiry information, an input step of accepting a reply, and a guidance step of generating and outputting candidates of guidance information corresponding to the reply. The dialogue system includes a mode for outputting options based on the inquiry information to a touch panel, a dialogue mode for outputting comments based on the inquiry information through the touch panel or a sound output device, and a mode switching step that selects between the two modes according to the operation situation of the dialogue system; the inquiry step and the guidance step use the selected mode. | 2019-12-26 |
20190391829 | APPARATUS AND METHOD FOR PROVIDING A VIRTUAL DEVICE - An apparatus for providing a virtual device has a circuitry. The circuitry searches for resource devices providing characteristics of a set of characteristics of the virtual device in a distributed ledger. The distributed ledger includes information about multiple resource devices. The circuitry provides the virtual device by selecting resource devices providing characteristics of the set of characteristics of the virtual device. | 2019-12-26 |
20190391830 | SYSTEM AND METHOD OF EMULATING EXECUTION OF FILES BASED ON EMULATION TIME - Disclosed are systems and methods for emulating execution of a file based on emulation time. In one aspect, an exemplary method comprises, generating an image of a file, emulating an execution of instructions from the image for a predetermined emulation time, the emulation including: when an emulation of an execution of instruction from an image of another file is needed, generating an image of the another file, detecting known set of instructions in portions read from the image, inserting a break point into a position in the generated image corresponding to a start of the detected set of instructions, emulating execution of the another file by emulating execution of instructions from the generated image, and adding corresponding records to an emulation log, and reading a next portion from the image of the another file and repeating the emulation until the predetermined emulation time has elapsed. | 2019-12-26 |
20190391831 | SEAMLESS VIRTUAL STANDARD SWITCH TO VIRTUAL DISTRIBUTED SWITCH MIGRATION FOR HYPER-CONVERGED INFRASTRUCTURE - A method to migrate a cluster's hosts and virtual machines from virtual standard switches to a virtual distributed switch includes creating distributed port groups on the virtual distributed switch, where properties of the distributed port groups are automatically replicated to host proxy switches on the hosts. The method further includes configuring the distributed port group with ephemeral binding so port binding of the distributed port group is configurable through a host in the cluster even when an infrastructure virtual machine that manages the cluster is down, determining (or receiving user input indicating) the infrastructure virtual machine is on the host, and issuing a call to the host to migrate (1) the infrastructure virtual machine to the distributed port group and (2) one or more physical network interface cards of the host to the virtual distributed switch. The method also includes migrating other virtual machines on the host to the virtual distributed switch while tolerating any inaccessible VM on the host due to network partition. | 2019-12-26 |
20190391832 | VIRTUAL MACHINE MIGRATION USING TRACKED ERROR STATISTICS FOR TARGET FIBRE CHANNEL PORTS - The disclosure relates to migration of virtual machines. In an example implementation, migration of a virtual machine (VM) is initiated from a source hypervisor to a destination hypervisor. A destination fibre channel (FC) port associated with the destination hypervisor is assigned to support a virtual initiator port of the VM upon migration, where the destination FC port is assigned using at least error statistics collected for the destination FC port. The VM is migrated from the source hypervisor to the destination hypervisor by supporting the virtual initiator port of the VM on the assigned destination FC port associated with the destination hypervisor. | 2019-12-26 |
20190391833 | SYSTEM AND METHOD FOR MANAGING TELEMETRY DATA AND AGENTS IN A TELEMETRY SYSTEM - A system and method include determining, by a telemetry control system of a telemetry system that an agent associated with the telemetry control system terminated during operation. The agent collects telemetry data from data sources associated with the telemetry system. The system and method also include determining that a number of times the agent has terminated is greater than a predetermined threshold, restarting the agent after a first predetermined delay in response to exceeding the predetermined threshold, and determining that the agent terminated again within a predetermined time period upon restarting. The system and method further include updating a configuration file of the agent in response to the termination within the predetermined time period and restarting the agent with the updated configuration file. The updating is based upon an agent termination record of the agent. | 2019-12-26 |
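The agent restart policy above can be condensed into a small decision function. The threshold, delay, quick-death window, and the particular configuration change below are all assumptions made for illustration; the abstract specifies only that the restart is delayed past a termination threshold and that the configuration file is updated after a too-quick second death.

```python
# Hedged sketch of the telemetry-agent restart policy: restart a terminated
# agent, but after a delay once terminations exceed a threshold, and update
# its configuration if it terminated again within a short window.

THRESHOLD = 3          # assumed max terminations before delaying restarts
DELAY_SECONDS = 30     # assumed first predetermined delay
QUICK_DEATH = 10       # assumed "terminated again" window, in seconds

def plan_restart(termination_count, seconds_since_restart, config):
    """Return (delay, config) to use for the next restart attempt."""
    delay = DELAY_SECONDS if termination_count > THRESHOLD else 0
    if seconds_since_restart is not None and seconds_since_restart < QUICK_DEATH:
        # The agent died right after restarting: update its configuration
        # (here, simply shrink its workload) before trying again.
        config = dict(config, sources=config["sources"][:1])
    return delay, config

# An agent that has died 4 times, most recently 5 s after its last restart.
delay, cfg = plan_restart(4, 5, {"sources": ["cpu", "disk", "net"]})
```

Backing off and reconfiguring together prevents a crash-looping agent from hammering the telemetry system with identical doomed restarts.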
20190391834 | EXECUTION OF AUXILIARY FUNCTIONS IN AN ON-DEMAND NETWORK CODE EXECUTION SYSTEM - Systems and methods are described for providing auxiliary functions in an on-demand code execution system in a manner that enables efficient execution of code. A user may generate a task on the system by submitting code. The system may determine the auxiliary functions that the submitted code may require when executed on the system, and may provide these auxiliary functions by provisioning sidecar virtual machine instances that work in conjunction with the virtual machine instance executing the submitted code. The sidecars may provide auxiliary functions on a per-task, per-user, or per-request basis, and the lifecycles of the sidecars may be determined based on the lifecycles of the virtual machine instances that execute submitted code. Auxiliary functions may thus be provided only when needed, and may be provided securely by preventing a user from accessing the sidecars of other users. | 2019-12-26 |
20190391835 | SYSTEMS AND METHODS FOR MIGRATION OF COMPUTING RESOURCES BASED ON INPUT/OUTPUT DEVICE PROXIMITY - In accordance with embodiments of the present disclosure, an information handling system may include a plurality of host systems and a hypervisor manager comprising a program of instructions configured to, when read and executed by a processor of one of the plurality of host systems, in response to a command for migrating a computing resource executing on one of the plurality of host systems, select a host system as a target for migrating the computing resource based on a proximity of input/output devices of the host system with respect to a proximity domain of the host system, and migrate the computing resource to the host system selected as the target. | 2019-12-26 |
20190391836 | OPERATION MANAGEMENT APPARATUS, MIGRATION DESTINATION RECOMMENDATION METHOD, AND STORAGE MEDIUM - An operation management apparatus includes a processor. The processor generates a VM load model for each virtual machine running on an information processing system, generates resource utilization rate estimation data based on VM load models of a virtual machine group running on the physical machine and a VM load model of a first virtual machine, for each of physical machines except for a first physical machine on which the first virtual machine is running, generates a resource competition occurrence model based on the resource utilization rate of the physical machine, calculates a statistical value of competition occurrence probabilities of the resource, for each of the physical machines except for the first physical machine, based on the resource utilization rate estimation data and the resource competition occurrence model, specifies the migration destination physical machine based on the statistical value, and outputs information of a specified migration destination physical machine. | 2019-12-26 |
20190391837 | PROCESSING DIVISION DEVICE, SIMULATOR SYSTEM AND PROCESSING DIVISION METHOD - A processing division device ( | 2019-12-26 |
20190391838 | HYPERVISOR FOR SHARED SPECTRUM CORE AND REGIONAL NETWORK ELEMENTS - Systems and methods include a manager for core network elements, regional network elements, and other network elements to facilitate use of and compatibility with shared access systems. | 2019-12-26 |
20190391839 | TECHNIQUES FOR MIGRATION PATHS - Exemplary embodiments described herein relate to a destination path for use with multiple different types of VMs, and techniques for using the destination path to convert, copy, or move data objects stored in one type of VM to another type of VM. The destination path represents a standardized (canonical) way to refer to VM objects from a proprietary VM. A destination location may be specified using the canonical destination path, and the location may be converted into a hypervisor-specific destination location. A source data object may be copied or moved to the destination location using a hypervisor-agnostic path. | 2019-12-26 |
20190391840 | MEMORY MODULE - When a request for saving task context information due to an interrupt is received, the access control circuit writes to the first storage unit context information transmitted in one cycle from the CPU through the first bus, a context number identifying the context information, and a link context number identifying the context information transmitted from the CPU prior to the interrupt. After writing to the first storage unit, the access control circuit transfers the data including the context information and the link context number stored in the first storage unit to the second storage unit over a plurality of cycles through the internal bus (second bus), in association with the context number stored in the first storage unit. | 2019-12-26 |
20190391841 | EXECUTION OF AUXILIARY FUNCTIONS IN AN ON-DEMAND NETWORK CODE EXECUTION SYSTEM - Systems and methods are described for providing auxiliary functions in an on-demand code execution system in a manner that enables efficient execution of code. A user may generate a task on the system by submitting code. The system may determine the auxiliary functions that the submitted code may require when executed on the system, and may provide these auxiliary functions by provisioning sidecar virtual machine instances that work in conjunction with the virtual machine instance executing the submitted code. The sidecars may provide auxiliary functions on a per-task, per-user, or per-request basis, and the lifecycles of the sidecars may be determined based on the lifecycles of the virtual machine instances that execute submitted code. Auxiliary functions may thus be provided only when needed, and may be provided securely by preventing a user from accessing the sidecars of other users. | 2019-12-26 |
20190391842 | APPARATUS AND METHOD TO PROVIDE HELP INFORMATION TO A USER IN A TIMELY MANNER - An apparatus stores status information and workload information for each task executed by a user. The apparatus detects, based on the status information, completion of a first task, and withholds notification of first help information related to software selected based on a usage state of the software in the first task. The apparatus detects, based on the status information, completion of a second task after completion of the first task, and calculates, based on the workload information, an index value indicating a total workload of completed tasks including the first and second tasks. When the index value is greater than a threshold, the apparatus allows providing the user with notification of the first help information and second help information related to the software selected based on a usage state of the software in the second task; otherwise the apparatus withholds notification of the first and second help information. | 2019-12-26 |
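The workload-gated notification scheme above reduces to summing per-task workloads against a threshold. The workload units, the threshold value, and the help-topic strings in this sketch are assumptions; only the gating rule comes from the abstract.

```python
# Sketch of the notification gate: help messages tied to completed tasks are
# withheld until the total workload of those tasks exceeds a threshold, at
# which point all accumulated help is released at once.

THRESHOLD = 10  # assumed total-workload threshold

def pending_help(completed_tasks, threshold=THRESHOLD):
    """Return help topics to show once total workload passes the threshold.

    Each completed task is a (workload, help_topic) pair; help keeps
    accumulating for as long as it is withheld.
    """
    total = sum(workload for workload, _ in completed_tasks)
    if total > threshold:
        return [topic for _, topic in completed_tasks]
    return []  # withhold all help for now

first  = pending_help([(6, "spreadsheet tips")])                   # withheld
second = pending_help([(6, "spreadsheet tips"), (7, "editor tips")])
```

The effect is that a user deep in a run of small tasks is not interrupted after each one; help arrives in a batch once enough work has been completed.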
20190391843 | SYSTEM AND METHOD FOR BACKING UP VIRTUAL MACHINE MEMORY WITH SHARED STORAGE FOR LIVE MIGRATION - A system and method include initiating a live migration of a first virtual machine from a first host machine to a second host machine. A shared host physical storage includes first swapped-out memory data associated with the first virtual machine from a first memory of the first host machine, and metadata including location information of the first swapped-out memory data and an identity of the associated first virtual machine. The system and method include copying memory data associated with the first virtual machine stored in the first memory to the second host machine. The system and method also include accessing, by a second hypervisor at the second host machine, the metadata stored in the shared host physical storage to determine the location of the first swapped-out memory data associated with the first virtual machine. | 2019-12-26 |
20190391844 | TASK ORCHESTRATION METHOD AND SYSTEM - Embodiments of the disclosure provide a method and system for task orchestration. A method may include: providing, by a task master control unit, an execution instruction of a task related to a module in an application container to a node agent service unit in an auxiliary application container bound to the application container, the auxiliary application container sharing a file system with the application container; and executing, by the node agent service unit, a command for completing the task, in response to acquiring the execution instruction of the task. | 2019-12-26 |
20190391845 | DYNAMIC TIME SLICING FOR DATA-PROCESSING WORKFLOW - A method for dynamically scheduling a data-processing workload includes recognizing minimum and maximum execution slice sizes and predicting an execution slice size for a current job of a collection of jobs. If the predicted execution slice size exceeds the maximum slice size, or if the job involves date-dependent records dated later than the current date, the job is split into a working slice and a remainder slice, the remainder slice is added back to the collection of jobs, and the working slice is executed. Otherwise, if the predicted execution slice size is between the minimum and maximum execution slice sizes, the current job is executed. | 2019-12-26 |
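The split-and-requeue rule lends itself to a short sketch. This is a minimal illustration under assumed representations (a job as a `(predicted_size, has_future_records)` pair; the below-minimum case is left as a placeholder), not the patented scheduler itself.

```python
def process_jobs(jobs, min_slice, max_slice):
    """Execute a collection of jobs under dynamic time slicing.

    jobs: list of (predicted_slice_size, has_future_dated_records)
    Returns the list of executed slice sizes, in execution order.
    """
    executed = []
    queue = list(jobs)
    while queue:
        size, has_future = queue.pop(0)
        if size > max_slice or has_future:
            # Split into a working slice (capped at max) and a remainder
            # slice that rejoins the collection of jobs.
            working = min(size, max_slice)
            remainder = size - working
            if remainder > 0:
                queue.append((remainder, has_future))
            executed.append(working)
        elif size >= min_slice:
            executed.append(size)       # within bounds: run as-is
        else:
            pass                        # below minimum: deferred (placeholder)
    return executed
```

A predicted size of 10 with bounds [2, 4] executes as slices of 4, 4, and 2, each remainder cycling back through the job collection.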
20190391846 | SEMICONDUCTOR INTEGRATED CIRCUIT, CPU ALLOCATION METHOD, AND PROGRAM - The semiconductor integrated circuit includes a plurality of CPUs (a big CPU and a LITTLE CPU), each with different performance. The semiconductor integrated circuit determines an effective CPU to be allocated to a task realized by at least one of a plurality of functional blocks, according to a device table defining a relationship between the plurality of functional blocks and any one of the plurality of CPUs. | 2019-12-26 |
20190391847 | Resource Scheduling Method and Related Apparatus - A resource scheduling method and a related resource scheduling apparatus to improve data input/output (I/O) efficiency, where the method includes determining a current task queue, where the current task queue includes a plurality of to-be-executed application tasks, determining, for data blocks on a disk to be accessed by the application tasks, a quantity of times that each data block is to be accessed by the application tasks, determining a hotspot data block according to the quantity of times that each data block is to be accessed by the application tasks, and sending a move-in instruction to a local node of the hotspot data block, where the move-in instruction instructs to move the hotspot data block into a memory such that the hotspot data block can be accessed in the memory. | 2019-12-26 |
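The hotspot-detection step can be sketched directly from the abstract. The data shapes here (a task as a dict with a `"blocks"` list, a threshold for "hotspot") are assumptions for illustration; the abstract itself does not specify how the access-count cutoff is chosen.

```python
from collections import Counter

def find_hotspots(task_queue, hot_threshold):
    """Count how many times each disk data block will be accessed by the
    queued to-be-executed tasks; blocks at or above the threshold are
    hotspot blocks."""
    counts = Counter()
    for task in task_queue:
        for block in task["blocks"]:
            counts[block] += 1
    return [block for block, n in counts.items() if n >= hot_threshold]


def move_in(hotspots, memory):
    """A stand-in for the move-in instruction: load each hotspot block
    into memory so subsequent accesses are served from memory."""
    for block in hotspots:
        memory[block] = f"data({block})"
```

Counting over the whole current task queue, rather than reacting to individual accesses, is what lets the scheduler move data in before the tasks run.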
20190391848 | Method for Controlling Fingerprint Processing Resources, Terminal, and Computer Readable Storage Medium - Provided are a method for controlling fingerprint processing resources, a terminal, and a computer readable storage medium. The method includes the following. A terminal adds, in a predetermined order, N access requests for the fingerprint processing resources initiated concurrently by N applications of the terminal to a predetermined access queue upon detecting the N access requests, where, in the predetermined access queue, an access request first added is first processed, and N is an integer greater than one. The terminal allocates the fingerprint processing resources to an application corresponding to an access request currently processed in the predetermined access queue, and updates the access request currently processed in the predetermined access queue according to a duration in which the application occupies the fingerprint processing resources. | 2019-12-26 |
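The queueing discipline above is plain FIFO arbitration, which a minimal sketch can show. The class and method names are hypothetical; the abstract's timing rule (the head of the queue advances when the occupying application's duration expires) is reduced here to an explicit `release_current` call.

```python
from collections import deque

class FingerprintResource:
    """FIFO arbitration of concurrent access requests: the access
    request first added is first processed."""
    def __init__(self):
        self.queue = deque()
        self.current = None  # application currently holding the resource

    def request(self, app):
        self.queue.append(app)
        if self.current is None:
            self.current = self.queue.popleft()

    def release_current(self):
        # Called when the current application's occupancy duration ends;
        # the queue head becomes the new occupant.
        self.current = self.queue.popleft() if self.queue else None
```

Three concurrent requesters a, b, c are granted the resource strictly in arrival order, regardless of how long each holds it.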
20190391849 | Method for Processing Service - According to one exemplary embodiment of the present disclosure, a computer program stored in a computer readable storage medium is disclosed. The computer program may make operations for processing a service to be performed when the computer program is executed in one or more processors of a computing device, and the operations may include: an operation of allocating, by a control thread, processing of a first service to one worker thread among worker threads; an operation of performing, by the worker thread, the processing of the first service and determining whether a call of a second service is required for processing the first service; an operation of transferring, by the worker thread, service call information to an interworking support unit when the worker thread determines that the call of the second service is required for processing the first service; an operation of receiving, by the interworking support unit, a processing result of the second service; and an operation of transferring, by the interworking support unit, the processing result of the second service to the worker thread or another worker thread so that the processing of the first service is resumed. | 2019-12-26 |
20190391850 | METHOD AND SYSTEM FOR OPPORTUNISTIC LOAD BALANCING IN NEURAL NETWORKS USING METADATA - Methods and systems for opportunistic load balancing in deep neural networks (DNNs) using metadata. Representative computational costs are captured, obtained or determined for a given architectural, functional or computational aspect of a DNN system. The representative computational costs are implemented as metadata for the given architectural, functional or computational aspect of the DNN system. In an implementation, the computed computational cost is implemented as the metadata. A scheduler detects whether there are neurons in subsequent layers that are ready to execute. The scheduler uses the metadata and neuron availability to schedule and load balance across compute resources and available resources. | 2019-12-26 |
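One plausible way the metadata can drive load balancing is a greedy least-loaded assignment, sketched below. This is an assumed scheduling policy for illustration, not the patent's specific scheduler: it only shows cost metadata and neuron readiness being combined to balance work across compute resources.

```python
import heapq

def schedule_neurons(ready_neurons, cost_metadata, num_resources):
    """Greedy load balancing: assign each ready neuron to the currently
    least-loaded compute resource, using per-neuron cost metadata.

    cost_metadata: dict mapping neuron id -> representative cost.
    Returns dict mapping neuron id -> resource index.
    """
    # Min-heap of (accumulated_load, resource_index).
    heap = [(0.0, r) for r in range(num_resources)]
    heapq.heapify(heap)
    assignment = {}
    # Place expensive neurons first for a tighter balance.
    for neuron in sorted(ready_neurons, key=lambda n: -cost_metadata[n]):
        load, res = heapq.heappop(heap)
        assignment[neuron] = res
        heapq.heappush(heap, (load + cost_metadata[neuron], res))
    return assignment
```

Because costs are precomputed metadata rather than measured at run time, the scheduler can make this decision the moment a neuron becomes ready.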
20190391851 | SYSTEM AND METHOD FOR MANAGING MEMORY IN VIRTUAL MACHINES - A system and method include managing allocation of host physical memory to a guest physical memory of a virtual machine running on a computing node. The node includes hardware resources that are mapped to the guest physical memory by a hypervisor. The hypervisor allocates a first amount of the host physical memory to the guest physical memory. The hypervisor also receives first page fault information. The hypervisor determines, based on the first page fault information, a first page fault rate. The hypervisor also determines that the first page fault rate is greater than a threshold rate, and allocates a second amount, greater than the first amount, of the host physical memory to the guest physical memory. | 2019-12-26 |
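The fault-rate-driven growth policy reduces to a few lines. All parameters here (step size, cap, measurement interval) are assumptions added for illustration; the abstract specifies only that allocation grows when the fault rate exceeds a threshold.

```python
def adjust_allocation(current_mb, fault_count, interval_s,
                      threshold_rate, step_mb, max_mb):
    """Grow the guest's host-memory allocation by one step when the
    observed page-fault rate exceeds the threshold rate.

    fault_count / interval_s gives faults per second over the
    measurement window (a hypothetical sampling scheme).
    """
    fault_rate = fault_count / interval_s
    if fault_rate > threshold_rate:
        # Second amount, greater than the first, capped at the host limit.
        return min(current_mb + step_mb, max_mb)
    return current_mb  # rate acceptable: keep the first amount
```

For example, 500 faults over 10 seconds is 50 faults/s; against a 40 faults/s threshold, a 1024 MB guest grows by one 256 MB step.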
20190391852 | PROCESSING ELEMENT RESTART PRECEDENCE IN A JOB OVERLAY ENVIRONMENT - Embodiments generally relate to processing element restart precedence in a job overlay environment. In some embodiments, a method includes determining a job overlay, wherein the job overlay involves updates to a subset of processing elements of a plurality of processing elements of a job. The method further includes determining processing requirements of the plurality of processing elements. The method further includes determining computation capabilities of computational resources associated with the plurality of processing elements. The method further includes determining a processing element restart order based at least in part on processing requirements and computation capabilities. The method further includes updating the subset of processing elements. The method further includes restarting the subset of processing elements based at least in part on the processing element restart order. | 2019-12-26 |
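The ordering step can be sketched with one assumed precedence rule: restart first the processing elements that are most demanding relative to the capability of their assigned resource. The abstract does not commit to this particular ratio; it is a hypothetical policy consistent with "based at least in part on processing requirements and computation capabilities".

```python
def restart_order(updated_pes, requirements, capabilities):
    """Order the subset of updated processing elements for restart.

    requirements: dict PE -> processing requirement (e.g. load units)
    capabilities: dict PE -> capability of its computational resource
    Most constrained (highest requirement-to-capability ratio) first.
    """
    return sorted(updated_pes,
                  key=lambda pe: requirements[pe] / capabilities[pe],
                  reverse=True)
```

A PE needing 8 units on a capability-2 resource (ratio 4) restarts before one needing 6 on capability 2 (ratio 3), with a comfortably provisioned PE (ratio 1) last.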
20190391853 | MULTI-TIER COORDINATION OF DESTRUCTIVE ACTIONS - A distributed storage network (DSN) processes storage unit maintenance tasks on multiple tiers within the DSN. A master storage unit coordinates pending maintenance tasks when a DSN management unit, originally processing the pending maintenance tasks, changes its status to offline. The method includes the master storage unit aggregating pending maintenance tasks from corresponding DSN storage units into an ordered list of maintenance tasks, facilitating, based on the ordered list of maintenance tasks, coordination of a next maintenance task with a corresponding storage unit and directing execution of the next maintenance task by the corresponding storage unit. | 2019-12-26 |
20190391854 | ATTRIBUTE COLLECTION AND TENANT SELECTION FOR ON-BOARDING TO A WORKLOAD - A tenant model models workload usage of tenants, based upon a set of tenant attributes. The model is applied to a set of tenants waiting to be on-boarded to a workload to identify a metric indicative of likely tenant usage of the workload. A subset of the set of tenants is identified for on-boarding based upon the metric, and on-boarding functionality is controlled for the identified subset of tenants. | 2019-12-26 |
20190391855 | TECHNOLOGIES FOR PROVIDING EFFICIENT ACCESS TO DATA IN AN EDGE INFRASTRUCTURE - Technologies for providing efficient data access in an edge infrastructure include a compute device comprising circuitry configured to identify pools of resources that are usable to access data at an edge location. The circuitry is also configured to receive a request to execute a function at an edge location. The request identifies a data access performance target for the function. The circuitry is also configured to map, based on a data access performance of each pool and the data access performance target of the function, the function to a set of the pools to satisfy the data access performance target. | 2019-12-26 |
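The mapping step can be illustrated with a greedy sketch. The selection policy (highest-performance pools first until the target is met) and the units (Gbps) are assumptions for illustration; the abstract states only that the mapping is based on each pool's data-access performance and the function's target.

```python
def map_function_to_pools(pools, target_gbps):
    """Pick a set of resource pools whose combined data-access
    performance satisfies the function's performance target.

    pools: dict pool_name -> data-access performance (assumed Gbps).
    Returns the chosen pool names, or None if the target is unmeetable.
    """
    chosen, total = [], 0.0
    # Greedy: best-performing pools first (one possible policy).
    for name, perf in sorted(pools.items(), key=lambda kv: -kv[1]):
        if total >= target_gbps:
            break
        chosen.append(name)
        total += perf
    return chosen if total >= target_gbps else None
```

With pools at 5, 3, and 1 Gbps and a 7 Gbps target, the two fastest pools suffice and the third is left free for other functions.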
20190391856 | SYNCHRONIZATION OF MULTIPLE QUEUES - Particular embodiments described herein provide for an electronic device that can be configured to process a plurality of descriptors from a queue, determine that a descriptor is a barrier descriptor, stop the processing of the plurality of descriptors from the queue, extract a global address from the barrier descriptor, communicate a message to the global address that causes a counter associated with the global address to be incremented, determine the contents of the counter at the global address, perform an action when the contents of the counter at the global address satisfy a threshold, and continue to process descriptors from the queue. | 2019-12-26 |
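The barrier-descriptor mechanism behaves like a counter-based rendezvous, which can be sketched with threads standing in for queues. The class name and the use of a `threading.Condition` are illustrative choices, not the hardware mechanism; the shared counter and threshold are the elements taken from the abstract.

```python
import threading

class BarrierSync:
    """Each queue carries a barrier descriptor referencing a shared
    counter; a queue pauses at the barrier until the counter reaches
    the threshold (here, the number of participating queues)."""
    def __init__(self, num_queues):
        self.threshold = num_queues
        self.counter = 0  # the counter at the "global address"
        self.cond = threading.Condition()

    def hit_barrier(self):
        with self.cond:
            self.counter += 1          # message increments the counter
            self.cond.notify_all()
            while self.counter < self.threshold:
                self.cond.wait()       # descriptor processing is stopped
        # Threshold satisfied: the queue resumes processing descriptors.
```

Each of N queues stops at its barrier descriptor after incrementing the counter; the last arrival pushes the counter to the threshold and all queues resume together.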
20190391857 | Consolidating Read-Copy Update Flavors Having Different Notions Of What Constitutes A Quiescent State - A technique for consolidating RCU flavors having different notions of what constitutes a quiescent state that allows destructive-to-reader actions to be performed following an associated RCU grace period. The technique may include monitoring for a quiescent state by checking first quiescent state criteria that are indicative of a CPU having no task running inside an RCU read-side critical section that could be affected by the destructive-to-reader actions. If the quiescent state has been reached, a check may be made for the existence of a condition that is indicative of a requirement to satisfy one or more additional quiescent state criteria before reporting the quiescent state on behalf of the CPU. If the condition is detected, reporting of the quiescent state may be deferred until the one or more additional quiescent state criteria are satisfied. The quiescent state may then be reported if it is useful and safe to do so. | 2019-12-26 |
20190391858 | SHARED APPLICATION INTERFACE DATA THROUGH A DEVICE-TO-DEVICE COMMUNICATION SESSION - There are provided systems and methods for shared application interface data through a device-to-device communication session. A user may utilize a device to engage in an electronic communication session with another user, such as a shared messaging or video chat session. During the session, the user may utilize another application on the same device to perform separate application data processing, such as accessing a website or an online marketplace that may include interface output data. The user may activate a plug-in or add-on that may allow application data sharing for current application data in the separate application during the communication session. The device may determine the present application data, such as a displayable instance of the current interface data, and may transmit the data to the other user's device through the communication session. Further, the plug-in may allow for split transaction and data processing. | 2019-12-26 |
20190391859 | SYSTEMS AND METHODS FOR IMPLEMENTING AN INTELLIGENT APPLICATION PROGRAM INTERFACE FOR AN INTELLIGENT OPTIMIZATION PLATFORM - Systems and methods for implementing an application programming interface (API) that controls operations of a machine learning tuning service for tuning a machine learning model for improved accuracy and computational performance include an API that is in control communication with the tuning service that: executes a first API call function that includes an optimization work request that sets tuning parameters for tuning hyperparameters of a machine learning model, and initializes an operation of distinct tuning worker instances of the service that each execute distinct tuning tasks for tuning the hyperparameters; executes a second API call function that identifies raw values for the hyperparameters, and generates suggestions comprising proposed hyperparameter values selected from the plurality of raw values for each of the hyperparameters; and executes a third API call function that returns performance metrics relating to a real-world performance of the subscriber machine learning model executed with the proposed hyperparameter values. | 2019-12-26 |
20190391860 | ENABLING SYNCHRONOUS EDITABLE SIGNALS IN PROCESS MODELING - The present disclosure involves systems, software, and computer implemented methods for enabling synchronous editable signals in process modeling. One example method includes receiving, at a receiver component, a message from a sending component as part of execution of an integration scenario with an external system. The receiver component is an originator that is configured to send event data to at least one registered listener task that has been bound to the receiver. Each registered listener is provided with the event data upon execution and is enabled to enhance the received event data. The receiver component waits to receive a completion notification from each registered listener and generates an acknowledgement to be sent to the sending component, using the event data enhanced by the at least one registered listener. The generated acknowledgment is sent to the sending component. | 2019-12-26 |
20190391861 | PRESENTING COLLABORATION ACTIVITY - Systems and methods for presenting collaboration activity to a particular user are disclosed. A method embodiment commences by recording event records that codify one or more event attributes corresponding to one or more content object access events. The content object access events are associated with two or more users that interact with the content objects. At a later moment in time, a subset of event records is selected, the selection being based at least in part on timestamps of the content object access events. A display order to apply to the selected subset of event records is determined, the order being based at least in part on timestamps of collaboration events arising from the users. Event messages to present in a user interface are generated, and the event messages are then displayed in the user interface in accordance with the display order. | 2019-12-26 |
20190391862 | SERVICE PROVIDING SYSTEM, SERVICE PROVIDING METHOD, TERMINAL CONTROL METHOD, AND NON-TRANSITORY RECORDING MEDIUM - A server includes a provider that provides, when a contents ID to identify contents is designated from a terminal, the contents associated with the contents ID to the terminal. The terminal includes a display that displays the contents obtained from the server on a screen. The provider puts predetermined suppress information in the contents when a busy level of the server is higher than a predetermined threshold. When the present date and time is within a display time period set for a popup message and the contents displayed on the screen do not contain the predetermined suppress information, the display displays the popup message in the screen, either overlaid on the contents or instead of the contents; when the displayed contents contain the predetermined suppress information, the display keeps displaying the contents in the screen. | 2019-12-26 |