4th week of 2016 patent application highlights part 47 |
Patent application number | Title | Published |
20160026422 | SYSTEMS AND METHODS FOR PROVIDING ACCESSORY DISPLAYS FOR ELECTRONIC DEVICES - Systems and methods herein provide for an accessorizeable display of features for a processing device, such as a smart phone, a tablet computer, a laptop computer, or other processing devices. The display may be configured as a case on the “backside” or cover of the processing device that interfaces through a communication port of the processing device. In one embodiment, a system includes a display data module operable on the processing device to provide a graphical user interface via a first display device to a user of the processing device, and to interface with a second display device coupled to the processing device to display an image on the second display device. The system also includes a remote data center operable to retrieve the image and to communicate with the display data module through a network to provide the image to the second display device via the display data module. | 2016-01-28 |
20160026423 | MOUNTABLE DISPLAY DEVICES - The present disclosure provides display devices and methods. A display device can include a visual curvilinear display mounted on a support member. A user may display or project media through the visual curvilinear display according to a display and/or location preference or schedule of the user. | 2016-01-28 |
20160026424 | DISPLAY APPARATUS AND CONTROLLING METHOD OF DISPLAY APPARATUS - A display apparatus including a plurality of display devices and a control system is provided. Each of the display devices has at least two sensing units. When the display devices are arranged in sequence to commonly constitute a display interface, at least one of the sensing units of each of the display devices is aligned to one of the sensing units of another one of the display devices to generate a sensing signal. The control system is electrically connected to each of the display devices. The control system determines an arranged position of each of the display devices according to each of the sensing signals, and controls a displayed image of each of the display devices according to the arranged position of each of the display devices. In addition, a controlling method of the display apparatus is also provided. | 2016-01-28 |
20160026425 | MOBILE TERMINAL AND CONTROL METHOD FOR THE MOBILE TERMINAL - A mobile terminal including a wireless communication unit configured to wirelessly communicate with an external device; a touch screen configured to sense a touch input, and switch between an active state and an inactive state; and a controller configured to receive a plurality of touch inputs applied to the touch screen in the inactive state, and when the received touch inputs satisfy a preset criteria, control the external device to release a locked state of the external device, activate a first region of the touch screen in the inactive state, and display screen information corresponding to the external device in the activated first region of the touch screen. | 2016-01-28 |
20160026426 | IMAGE DISPLAY DEVICE AND METHOD OF CONTROLLING THE SAME - Disclosed herein are an image display device and a method of controlling the image display device. The image display device includes an image acquirer configured to acquire a user image, an image outputter configured to display the user image, a communicator configured to perform communication with a mobile terminal, and a controller configured to perform a real time image display operation. The real time image display operation includes displaying the user image in real time and transmitting image data processed from the user image to the mobile terminal when a real time image display command is input. | 2016-01-28 |
20160026427 | Audio Settings - Method and systems are provided for a playback device to play a media item using an audio setting corresponding to the media item and characteristics of the playback device. The characteristics of the playback device may be one or more of a model of the playback device, an orientation of the playback device, or a playback group configuration of the playback device, among other possibilities. The audio setting may be determined specifically for the characteristics of the playback device, by a provider (i.e. an artist and/or producer of the media item, and/or a curator of a playlist including the media item, among others). The media item, when rendered by the playback device, may sound as the provider intended. | 2016-01-28 |
20160026428 | Device Grouping - An example method includes detecting, via a control interface of a first playback device while at least one second playback device is playing media, an input indicating a command for the first playback device to (i) form a zone with the second playback device and (ii) play back the media in synchrony, and based on the detected input, causing the first playback device to carry out the command. A second example method includes detecting, by a control device while at least one first playback device is playing media, an input indicating a command for a second playback device to (i) form a zone with the first playback device and (ii) play back the media in synchrony, wherein the control device is configured to transmit commands to the second playback device; and based on the detected input, causing the second playback device to carry out the command. | 2016-01-28 |
20160026429 | Zone Grouping - An example method involves causing a control device to display a graphical user interface that comprises an indication of a first zone of a media playback system, wherein the media playback system comprises the first zone and a second zone, and wherein the graphical user interface does not comprise an indication of the second zone. The example method further involves detecting, by the control device, an input that indicates a command to cause the first zone to form a zone group with the second zone and play back a target media in synchrony with the second zone. The method further comprises, based on the detected input, causing the first zone to form a zone group with the second zone and play back the target media in synchrony with the second zone. | 2016-01-28 |
20160026430 | DEVICE-SPECIFIC CONTROL - According to an example aspect of the present invention, there is provided an apparatus comprising at least one processing core, memory including computer program code, the memory and the computer program code being configured to, with the at least one processing core, cause the apparatus at least to receive from a plurality of physical devices indications relating to locations of the physical devices, wherein each physical device corresponds to a virtual space element, compute, for at least one of the plurality of physical devices, a sound level for a sound associated with a virtual space event, and cause transmission, to each of the plurality of physical devices, information identifying the sound and a sound level specific to the individual physical device. | 2016-01-28 |
20160026431 | MOBILE DEVICE AND CONTROLLING METHOD THEREFOR - The present invention relates to a mobile device and a method for controlling the same. In particular, the present invention relates to a mobile device and a method for controlling the same, which provide response data to a voice command by variously setting a feedback scheme for providing the response data depending on the way that a user uses the mobile device. | 2016-01-28 |
20160026432 | Method to broadcast internet voice news - A voice internet system comprises voice websites and internet broadcasting devices. When an internet broadcasting device logs into a voice website through the Internet, it will broadcast voice news headlines. While a news headline is being broadcasted, pushing a play button on a control panel of the internet broadcasting device or giving a voice command causes the corresponding news content under the headline to be broadcasted with voice. After completion of broadcasting the news content, the remaining headlines will be broadcasted. | 2016-01-28 |
20160026433 | SPEECH RECOGNITION INTERFACE FOR VOICE ACTUATION OF LEGACY SYSTEMS - Methods and apparatus are disclosed for a technician to access a systems interface to back-end legacy systems by voice input commands to a speech recognition module. Generally, a user logs a computer into a systems interface which permits access to back-end legacy systems. Preferably, the systems interface includes a first server with middleware for managing the protocol interface. Preferably, the systems interface includes a second server for receiving requests and generating legacy transactions. After the computer is logged-on, a request for voice input is made. A speech recognition module is launched or otherwise activated. The user inputs voice commands that are processed to convert them to commands and text that can be recognized by the client software. The client software formats the requests and forwards them to the systems interface in order to retrieve the requested information. | 2016-01-28 |
20160026434 | SYSTEM AND METHOD FOR CONTINUOUS MULTIMODAL SPEECH AND GESTURE INTERACTION - Disclosed herein are systems, methods, and non-transitory computer-readable storage media for processing multimodal input. A system configured to practice the method continuously monitors an audio stream associated with a gesture input stream, and detects a speech event in the audio stream. Then the system identifies a temporal window associated with a time of the speech event, and analyzes data from the gesture input stream within the temporal window to identify a gesture event. The system processes the speech event and the gesture event to produce a multimodal command. The gesture in the gesture input stream can be directed to a display, but is remote from the display. The system can analyze the data from the gesture input stream by calculating an average of gesture coordinates within the temporal window. | 2016-01-28 |
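The temporal-window fusion step in the abstract above (averaging gesture coordinates that fall inside the window of a speech event) can be sketched in Python; the function name, the `(timestamp, x, y)` sample layout, and the simple arithmetic mean are illustrative assumptions, not the patented implementation:

```python
def gesture_point_in_window(gesture_stream, t_start, t_end):
    """Average the (x, y) gesture coordinates that fall inside a temporal
    window, a crude way to fuse a pointing gesture with a speech event.

    gesture_stream: iterable of (timestamp, x, y) samples.
    Returns (x_avg, y_avg), or None if no sample falls in the window.
    """
    pts = [(x, y) for t, x, y in gesture_stream if t_start <= t <= t_end]
    if not pts:
        return None
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)
```

A speech event detected at, say, t = 1.0 s would define a window around that time, and the averaged point is treated as the gesture target for the multimodal command.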
20160026435 | SIMPLIFIED INVERSIONLESS BERLEKAMP-MASSEY ALGORITHM FOR BINARY BCH CODE AND CIRCUIT IMPLEMENTING THEREFOR - A simplified inversionless Berlekamp-Massey algorithm for binary BCH codes and circuit implementing the method are disclosed. The circuit includes a first register group, a second register group, a control element, an input element and a processing element. By breaking the completeness of math structure of the existing simplified inversionless Berlekamp-Massey algorithm, the amount of registers used can be reduced by two compared with conventional algorithm. Hardware complexity and operation time can be reduced. | 2016-01-28 |
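For orientation, the classic textbook Berlekamp-Massey algorithm over GF(2) (shortest-LFSR synthesis) is sketched below. This is the well-known baseline, not the patented simplified inversionless variant for BCH codes; variable names follow the conventional description:

```python
def berlekamp_massey_gf2(s):
    """Return (L, C): the length and connection polynomial C(x) = C[0] +
    C[1]x + ... of the shortest LFSR generating the bit sequence s."""
    n = len(s)
    C = [1] + [0] * n          # current connection polynomial
    B = [1] + [0] * n          # copy of C before the last length change
    L, m = 0, 1                # LFSR length; gap since last length change
    for i in range(n):
        # discrepancy between s[i] and the LFSR's prediction, over GF(2)
        d = s[i]
        for j in range(1, L + 1):
            d ^= C[j] & s[i - j]
        if d == 0:
            m += 1
        elif 2 * L <= i:       # length must grow: C <- C + x^m * B, swap
            T = C[:]
            for j in range(n - m + 1):
                C[j + m] ^= B[j]
            L = i + 1 - L
            B, m = T, 1
        else:                  # fix C without growing: C <- C + x^m * B
            for j in range(n - m + 1):
                C[j + m] ^= B[j]
            m += 1
    return L, C[:L + 1]
```

A connection polynomial `[1, 1, 1]` encodes the recurrence s[i] = s[i-1] XOR s[i-2]. In BCH decoding the same recursion runs over GF(2^m) on syndrome values, which is where inversionless refinements (avoiding finite-field division) pay off in hardware.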
20160026436 | Dynamic Multi-processing In Multi-core Processors - Aspects include computing devices, systems, and methods for implementing a pipeline multi-processing (PMP) mode on a computing device using a common FIFO unit. The computing device may use configuration information for the PMP mode to allocate FIFO components of the common FIFO unit to input write data from and output read data to specific processor cores. At least first and second processor cores may be allocated a FIFO component. The first processor core may request to input write data to the FIFO component and the second processor core may request to output the read data from the FIFO component. The allocation of the FIFO components may be static and/or dynamic. A FIFO access request may be denied when the common FIFO unit is already executing a similar FIFO access request, or when a FIFO component is either full and cannot input write data or empty and cannot output read data. | 2016-01-28 |
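The allocation-and-denial behavior described above can be modeled as a toy sketch; the class and method names, the `(writer core, reader core)` pairing, and the depth check are illustrative assumptions rather than the patented hardware design:

```python
from collections import deque

class CommonFifoUnit:
    """Toy model of a shared FIFO unit whose components are each
    allocated to a specific writer core and a specific reader core."""
    def __init__(self, n_components, depth):
        self.fifos = [deque() for _ in range(n_components)]
        self.alloc = {}            # fifo index -> (writer core, reader core)
        self.depth = depth

    def allocate(self, fifo_id, writer, reader):
        self.alloc[fifo_id] = (writer, reader)

    def write(self, core, fifo_id, data):
        writer, _ = self.alloc[fifo_id]
        fifo = self.fifos[fifo_id]
        if core != writer or len(fifo) >= self.depth:
            return False           # not the allocated writer, or FIFO full
        fifo.append(data)
        return True

    def read(self, core, fifo_id):
        _, reader = self.alloc[fifo_id]
        fifo = self.fifos[fifo_id]
        if core != reader or not fifo:
            return None            # not the allocated reader, or FIFO empty
        return fifo.popleft()
```

Denial on full/empty or wrong-core access mirrors the request-denied cases the abstract enumerates; a real unit would arbitrate concurrent requests as well.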
20160026437 | APPARATUS AND METHOD FOR PERFORMING FLOATING-POINT SQUARE ROOT OPERATION - A data processing apparatus has a processing circuitry for performing a floating-point square root operation on a radicand value R to generate a result value. The processing circuitry has first square root processing circuitry for processing radicand values R which are not an exact power of two and second square root processing circuitry for processing radicand values which are an exact power of 2. Power-of-two detection circuitry detects whether the radicand value is an exact power of two and selects the output of the first or second square root processing circuitry as appropriate. This allows the result to be generated in fewer processing cycles when the radicand is a power of 2. | 2016-01-28 |
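The power-of-two fast path can be illustrated with a short sketch; `is_power_of_two` and `fast_sqrt` are hypothetical names, and `math.sqrt` as the fallback stands in for the general-purpose (slower) square root circuit:

```python
import math

def is_power_of_two(x):
    """True when the positive float x is exactly 2**k for some integer k."""
    m, e = math.frexp(x)       # x = m * 2**e with 0.5 <= |m| < 1 (x != 0)
    return x > 0 and m == 0.5

def fast_sqrt(x):
    """Hypothetical fast path: a power-of-two radicand needs only
    exponent halving (plus one constant multiply when the exponent is odd)."""
    if is_power_of_two(x):
        _, e = math.frexp(x)
        exp = e - 1            # x == 2**exp
        half, odd = divmod(exp, 2)
        # sqrt(2**exp) = 2**(exp//2) * (sqrt(2) if exp is odd else 1)
        return math.ldexp(math.sqrt(2.0) if odd else 1.0, half)
    return math.sqrt(x)        # general-purpose path for everything else
```

This captures why the second circuit can finish in fewer cycles: for an exact power of two, no iterative root refinement is needed, only exponent arithmetic.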
20160026438 | Cloud Storage Methods and Systems - A system receives first requests to create electronic storage objects that are accessible on the communication network or another communication network, and creates electronic storage objects responsive to the first requests. The first requests specify one or more data types to which the system should convert raw data received for storage in the electronic storage objects. The system also receives second requests to store specified data to the electronic storage objects, and stores received raw data (or converts the raw data to specified data types and stores the converted data) to the electronic storage objects responsive to the second requests. The system further receives third requests to retrieve data from specified electronic storage objects, and retrieves data from the specified electronic storage objects responsive to the third requests, the retrieved data being in specified data types. | 2016-01-28 |
20160026439 | Interactive Code Editing - Techniques for interactive code editing are described. A system can provide for display a code editing environment that resembles a text editor. Upon detecting various user inputs, the system can display, in place of text, widgets in the code editing environment. The widgets can have the appearance of text, and have functions to interact with the user to provide various conveniences including, for example, line management, step completion, calculation completion, parameter management, and code folding. | 2016-01-28 |
20160026440 | VISUALIZATION OF INFORMATION USING LANDMASSES - The present disclosure relates to the visualization of complex information using a set of navigable landmasses. A method for generating a visualization of a programming code base using a set of navigable landmasses in accordance with an embodiment of the present disclosure includes: representing each of a plurality of different code components using a respective landmass; adjusting a size of each landmass based on a number of lines of code in the code component corresponding to the landmass; and displaying the landmasses. | 2016-01-28 |
20160026441 | Recursive ontology-based systems engineering - The present disclosure proposes a new model engineering method and system that permits the creation of application systems without the need of program development. The system allows organizations to search for high performance development teams and methods, and develop high quality solutions. The present disclosure covers the three central areas of systems engineering: (1) a method for creating models which represent reality in a standardized way; (2) a procedure for transforming models into computable artifacts, that is, computer systems that behave as specified in the model; and (3) a collaborative method based on knowledge representations. | 2016-01-28 |
20160026442 | METHOD AND SYSTEM FOR HOSTING AND RUNNING USER CONFIGURED DYNAMIC APPLICATIONS THROUGH A SOFTWARE CONTAINER ON A COMPUTING DEVICE - The embodiments herein provide a method and system for hosting and running user configured dynamic applications through a software container on a computing device. The system comprises a first program component for embedding a program interpreter along with a program logic for running a dynamic application; a second program component for processing the user interactions with the dynamic applications; a third program component for interacting with several computing device capabilities to provide an interface between the dynamic applications and several computing device capabilities; and a rendering engine program for rendering the dynamic applications. The first program component is an application interpreter while the second program component is a user interactions handler and the third program component is a device-bridge. The software container automatically pools and displays a specific data from the added dynamic applications to the user based on the user request. | 2016-01-28 |
20160026443 | PROCESSING SOURCE FILE - There is provided a method for processing a source file to generate an object file, comprising: obtaining a header file referenced by the source file; in response to the source file calling a data symbol defined in the header file, creating an indicator of a definition of the data symbol, wherein definitions of different data symbols correspond to different indicators; and adding the indicator into a compiling result of compiling the source file so as to generate the object file. With the present invention, a dependency between the source file and the header file can be recorded, and the number of source files needed to be re-compiled can be reduced on the basis of the dependency. | 2016-01-28 |
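The per-definition indicator idea can be sketched as follows. Hashing a symbol's definition text is one plausible way to make distinct definitions yield distinct indicators; every name and data layout here is an assumption for illustration, not the patented encoding:

```python
import hashlib

def symbol_indicator(definition):
    """Indicator for one data-symbol definition: distinct definitions map
    to distinct indicators (with overwhelming probability, via hashing)."""
    return hashlib.sha256(definition.encode()).hexdigest()[:16]

def needs_recompile(object_indicators, header_symbols):
    """object_indicators: {symbol: indicator} recorded in the object file.
    header_symbols: {symbol: definition text} in the current header.
    Recompile only if a symbol the object actually uses has changed."""
    return any(
        symbol_indicator(header_symbols[sym]) != indicator
        for sym, indicator in object_indicators.items()
        if sym in header_symbols
    )
```

The payoff the abstract claims falls out directly: editing a header symbol that a given source file never calls leaves that file's recorded indicators intact, so it is skipped during recompilation.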
20160026444 | SYSTEM CONVERTER THAT IMPLEMENTS A REORDERING PROCESS THROUGH JIT (JUST IN TIME) OPTIMIZATION THAT ENSURES LOADS DO NOT DISPATCH AHEAD OF OTHER LOADS THAT ARE TO THE SAME ADDRESS - A system for an agnostic runtime architecture. The system includes a system emulation/virtualization converter, an application code converter, and a system converter, wherein the system emulation/virtualization converter and the application code converter implement a system emulation process, and wherein the system converter implements a system and application conversion process for executing code from a guest image via the system converter or the system emulator. The system further includes a reordering process through JIT (just in time) optimization that ensures loads do not dispatch ahead of other loads that are to the same address, wherein a load will check for a same address of subsequent loads from a same thread, and a thread checking process that enables other thread store checks against the entire load queue and a monitor extension. | 2016-01-28 |
20160026445 | SYSTEM CONVERTER THAT IMPLEMENTS A RUN AHEAD RUN TIME GUEST INSTRUCTION CONVERSION/DECODING PROCESS AND A PREFETCHING PROCESS WHERE GUEST CODE IS PRE-FETCHED FROM THE TARGET OF GUEST BRANCHES IN AN INSTRUCTION SEQUENCE - A system for an agnostic runtime architecture. The system includes a system emulation/virtualization converter, an application code converter, and a system converter, wherein the system emulation/virtualization converter and the application code converter implement a system emulation process, and wherein the system converter implements a system and application conversion process for executing code from a guest image via the system converter or the system emulator. The system further includes a run ahead run time guest instruction conversion/decoding process, and a prefetching process where guest code is pre-fetched from the target of guest branches in an instruction sequence. | 2016-01-28 |
20160026446 | CONTEXT-FREE TYPE RECORDING FOR VIRTUAL MACHINES OF DYNAMIC PROGRAMMING LANGUAGES - A method and a computing device for reducing deoptimization in a virtual machine are provided. Source code of a dynamically-typed program is compiled. A context-free type-state recorder records a first data type of a value associated with a particular named memory location within the source code. Optimized code may be generated based on the first data type of the value being a matching data type for global values associated with the particular named memory location. One or more global values associated with the particular named memory location may be type-checked. The context-free type-state recorder may record, if one or more of the global values associated with the particular named memory location is a different data type than the first data type, one or more different data types associated with the particular named memory location. New optimized code may then be generated. | 2016-01-28 |
20160026447 | CONSOLE APPLICATION THROUGH WEB SERVICE - A method performed by a server computing system includes generating an operation for a web service, the operation corresponding to at least one main method of a console application, receiving input data from a client device through the operation, writing the input data to a memory store, executing the console application using the input data of the memory store, injecting code into the console application, the code to change input/output streams from a console input/output to the memory store. The method further includes writing an output of the console application to the memory store. | 2016-01-28 |
20160026448 | Identifying Unmatched Registry Entries - A method and a related system for identifying unmatched registry entries may be provided. The method may comprise scanning a file system and discovering software based on a file signature, collecting first attributes of the discovered software, collecting native registry entries, and comparing the first attributes against second attributes of the collected registry entries based on a filtering rule. Thereby, the registry entries may be grouped into two groups. One group may represent matched registry entries and the other group may represent unmatched registry entries. The unmatched registry entries may be identified as unequivocal entries for further software discovery. | 2016-01-28 |
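The matched/unmatched grouping can be sketched in a few lines; the attribute-dict layout and the pluggable `matches` filtering rule are illustrative assumptions:

```python
def split_registry_entries(discovered, registry_entries, matches):
    """discovered: attribute dicts for software found by file-signature scan.
    registry_entries: attribute dicts read from the native registry.
    matches(attrs, entry): the filtering rule deciding whether a registry
    entry corresponds to some discovered software."""
    matched, unmatched = [], []
    for entry in registry_entries:
        if any(matches(attrs, entry) for attrs in discovered):
            matched.append(entry)
        else:
            unmatched.append(entry)   # unequivocal hints for further discovery
    return matched, unmatched
```

The unmatched group is the interesting output: each entry there names software the signature scan missed, so it can seed a targeted follow-up discovery pass.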
20160026449 | Software Discovery in an Environment with Heterogeneous Machine Groups - A mechanism is provided for software discovery in an environment with heterogeneous machine groups. A group comprising computing systems that have similar software program installations is defined. A first scan procedure is performed by scanning each computing system of the group using a first software signature catalogue to identify installed programs. Software signatures of identified installed programs are added to a base installation software catalogue. A second scan procedure is performed by scanning the group of computing systems using the base installation software catalogue to identify installed software programs. | 2016-01-28 |
20160026450 | PROGRESS TRACKING SYSTEM AND METHOD - A method for synchronization and notification of any online posting and/or uploading of data by an administrator to a computer running a website to a mobile device of a particular user, without the need of an external push notification service, comprises the steps of using a web-to-app connectivity technology to directly synchronize the posting and the data to the mobile device of the particular user, and using the web-to-app connectivity technology to directly send a notification to the particular user when a new posting or new upload of data associated with that particular user is made. | 2016-01-28 |
20160026451 | Automated Operating System Installation on Multiple Drives - Technologies are provided herein for automated operating system installation on multiple drives. A device switch connects a mass storage device to a test control system (“TCS”) or a system under test (“SUT”). When connected to the TCS, the mass storage device is mounted with a disk image containing an installer program for an operating system. When the mass storage device is connected to the SUT, the installer program is executed to install the operating system onto an activated drive connected to the SUT. Multiple operating systems can be installed in a similar fashion by mounting a corresponding disk image for an operating system onto the mass storage device and by installing from the mass storage device the operating system onto a corresponding drive connected to the SUT. Errors generated during the automated installation process can be analyzed and utilized to identify and correct errors in a computing system firmware. | 2016-01-28 |
20160026452 | EFFICIENT DEPLOYMENT OF APPLICATION REVISIONS AND IMPLEMENTATION OF APPLICATION ROLLBACKS ACROSS MULTIPLE APPLICATION SERVERS - The deployment of application revisions and performing of application rollbacks across multiple application servers is streamlined by reducing the number of files that are communicated to the application servers to perform updates and rollbacks. An application service is provided by multiple application servers each executing a plurality of compiled code files associated with the application service. Each application server receives a compiled code file corresponding to an update for one of the plurality of compiled code files associated with the application service. The one compiled code file is replaced with the received compiled code file corresponding to the update. The application servers then provide an updated version of the application service by executing the plurality of compiled code files including the replacement compiled code file corresponding to the update. Application rollback is performed using compiled code files stored in a local repository of each application server. | 2016-01-28 |
20160026453 | PATCH PROCESS ENSURING HIGH AVAILABILITY OF CLOUD APPLICATION - A cyclical patching process associated with a cloud application may be defined to ensure high availability (HA) of the cloud application in order to prevent impacting an availability to end users. A list of server identities corresponding to one or more servers of a datacenter hosting the cloud application may be accepted. HA metric values for each of the server identities may be determined in order to compute an overall HA metric value for the cloud application. A subset of the servers may be removed from a rotation framework of the cloud application based on the determined HA metric values, where the removal does not affect the overall HA metric value of the cloud application. One or more patches may be applied to each server within the subset of servers in parallel, and the subset of servers may be reinstated in the rotation framework of the cloud application. | 2016-01-28 |
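The remove-patch-reinstate cycle can be sketched as waves of servers whose removal keeps the overall HA metric above a floor; the greedy subset selection and all names are assumptions for illustration, not the patented process:

```python
def patch_in_waves(servers, ha_value, min_ha, apply_patch):
    """Patch every server without letting the overall HA metric drop
    below min_ha. ha_value(rotation) scores a candidate rotation;
    apply_patch(server) patches one server (a whole wave in parallel)."""
    unpatched = list(servers)
    while unpatched:
        in_rotation = list(servers)    # previously patched servers reinstated
        wave = []
        for s in unpatched:
            trial = [x for x in in_rotation if x != s]
            if ha_value(trial) >= min_ha:
                wave.append(s)         # safe to remove from the rotation
                in_rotation = trial
        if not wave:
            raise RuntimeError("cannot patch without violating the HA floor")
        for s in wave:                 # patch the removed subset in parallel
            apply_patch(s)
        for s in wave:                 # reinstate and mark as done
            unpatched.remove(s)
```

With four servers, an HA metric of "fraction still in rotation," and a 0.5 floor, this patches two servers per wave and never takes more than half the fleet out of service at once.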
20160026454 | CIRCUIT AND METHOD FOR WRITING PROGRAM CODES OF BASIC INPUT/OUTPUT SYSTEM - A circuit for writing program codes of a basic input/output system (BIOS) is connected to a main circuit and a BIOS. The BIOS stores a BIOS program code and has a BIOS identifier. The circuit includes a data source connection interface, a judgment trigger module and a write module. The data source connection interface stores an available BIOS program code matching the BIOS. The judgment trigger module judges whether the BIOS is capable, after a power-on initialization phase starts, of completing loading the BIOS program code and performing system initialization within a preset time. If the BIOS is incapable of completing loading the BIOS program code and performing system initialization within the preset time, the judgment trigger module sends a trigger signal. After receiving the trigger signal, the write module downloads the available BIOS program code according to the BIOS identifier and writes it to the BIOS to overwrite the current data. | 2016-01-28 |
20160026455 | SYSTEMS AND METHODS FOR MANAGING FILES IN A CLOUD-BASED COMPUTING ENVIRONMENT - In one embodiment, a method for collecting updates for a plurality of objects over a cloud data network includes: determining a set of remote devices known to have updates for a selected object, wherein each of said remote devices maintains a set of locally updated objects that includes the selected object; and downloading the updates for the selected object from said set of remote devices. Where said downloading the updates for the selected object results in a name conflict, the method further includes resolving said name conflict, wherein said resolving includes selecting said selected object as a target and said existing object as an alias having a pointer relationship to the target; and merging all meta-data of the alias object into the target. | 2016-01-28 |
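The alias/target resolution step can be sketched as a small merge; representing storage objects as dicts with a nested `meta` dict is an illustrative assumption:

```python
def resolve_name_conflict(existing, downloaded):
    """When a downloaded object's name collides with an existing object:
    keep the downloaded (selected) object as the target, turn the existing
    object into an alias pointing at it, and merge the alias metadata in."""
    target, alias = downloaded, existing
    for key, value in alias.get("meta", {}).items():
        # merge: keep the target's value where both define the same key
        target.setdefault("meta", {}).setdefault(key, value)
    alias["alias_of"] = target         # pointer relationship to the target
    alias.pop("meta", None)            # metadata now lives on the target
    return target, alias
```

After the merge, lookups through the old name follow the alias pointer, and no metadata from either copy is lost.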
20160026456 | REMOTE MANAGEMENT OF ELECTRONIC PRODUCTS - A remote server may receive a data log with information regarding the status and/or a setting of an electronic product. The remote server may store information in the data log in a database. The remote server may process information in the database to determine whether a newer version of firmware is available for the electronic product. | 2016-01-28 |
20160026457 | AUTOMATED DEPLOYMENT AND SERVICING OF DISTRIBUTED APPLICATIONS - Deployment and servicing tasks associated with multi-tier, distributed applications, application environments and data centers are automated so that a person does not have to manually perform these tasks. All of the information describing and defining the distributed service is modeled and stored in a re-useable service template that can be used to drive an automated system to programmatically deploy and manage the service over time. Deployment and servicing of a distributed application can be automated using re-useable models that capture hardware and workload definitions. The re-useable models in the form of service templates enable delta-based servicing of the application. The service can be deployed to one or more physical machines, one or more virtual machines or to a combination thereof. A default deployment plan can be customized with instance-specific customizations of service parameters. | 2016-01-28 |
20160026458 | A METHOD, MEDIUM, AND APPARATUS FOR RE-PROGRAMMING FLASH MEMORY OF A COMPUTING DEVICE - A method of re-programming flash memory of a computing device is presented here. Software content having a plurality of software modules can be re-programmed by identifying, from the software modules, a first set of software modules to be programmed by delta programming and a second set of software modules to be programmed by non-delta programming. A first set of sectors of the flash memory is assigned for programming the first set of software modules, and a second set of sectors is assigned for programming the second set of software modules. At least some of the second set of sectors are designated as temporary backup memory space. The first set of sectors is programmed with the first set of software modules, using delta programming and the designated temporary backup memory space. After programming the first set of sectors, the second set of sectors is programmed with the second set of software modules, using non-delta programming. | 2016-01-28 |
20160026459 | DEVICE AND METHOD FOR UPDATING FIRMWARE OF A RACKMOUNT SERVER SYSTEM - A method for updating firmware of a rackmount server system is disclosed herein and is suited to update a firmware of an update-needed chip module by a control chip module. The method includes the following steps of: sending a plurality of packets constituting a renewed firmware to the update-needed chip module by the control chip module; sending a command of verifying the plurality of packets to the update-needed chip module by the control chip module; acquiring a plurality of verification messages, corresponding to the plurality of packets, from the update-needed chip module by the control chip module; and determining whether all of the plurality of verification messages corresponding to the plurality of packets are correct or not by the control chip module, thereby making sure whether the plurality of packets received by the update-needed chip module are incorrect or not. | 2016-01-28 |
20160026460 | Device and Method for Upgrading Data Terminal - Provided are a device and method for upgrading a data terminal. The device includes a dialing component, a protocol component, a Dynamic Host Configuration Protocol (DHCP) server component, a router component, an Internet Protocol (IP) processing component, an upgrading component and a web server component, wherein the dialing component implements a dialing flow; and the upgrading component acquires a private IP address from the DHCP server component, sends a request message of detecting whether there is a new version to a version server through the router component and the protocol component, and if there is a new version, downloads the new version from the version server and writes the new version into a flash of the data terminal, and then the data terminal is automatically restarted to finish upgrading. According to the technical solution, an upgrading process of the data terminal under a win8 operating system, a web server access process and a network access process of a Personal Computer (PC) are independently implemented, relevance between the upgrading process of the data terminal and an operating system of the PC is reduced, Microsoft win8 logo authentication can be passed, and a driver-free function may further be realized. | 2016-01-28 |
20160026461 | SYSTEMS AND METHODS FOR AUTOMATIC API DOCUMENTATION - Systems, methods, and articles of manufacture provide for automatic API documentation. | 2016-01-28 |
20160026462 | APPLICATION WRAPPING FOR APPLICATION MANAGEMENT FRAMEWORK - Methods and systems for developing, modifying, and distributing software applications for enterprise systems are described herein. A software component, such as a native mobile application or a template application, may be modified into a managed mobile application, and metadata associated with the managed mobile application may be generated. The managed application and associated metadata may be provided to one or more application stores, such as public application stores and/or enterprise application stores. Managed applications and/or associated metadata may be retrieved by computing devices from public application stores and/or enterprise application stores, and may be executed as managed applications in an enterprise system. | 2016-01-28 |
20160026463 | ZERO CYCLE MOVE USING FREE LIST COUNTS - A system and method for reducing the latency of data move operations. A register rename unit within a processor determines whether a decoded move instruction qualifies for a zero cycle move operation. If so, control logic assigns a physical register identifier associated with a source operand of the move instruction to the destination operand of the move instruction. Additionally, the register rename unit marks the given move instruction to prevent it from proceeding in the processor pipeline. Further maintenance of the particular physical register identifier may be done by the register rename unit during commit of the given move instruction. | 2016-01-28 |
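The zero-cycle move idea can be illustrated with a toy rename table: a qualifying move never executes; the destination architectural register is simply remapped to the source's physical register, with a duplicate count (the "free list count" of the title) tracking when a physical register may be freed. The class and field names below are illustrative, not from the patent.

```python
class RenameUnit:
    """Toy register rename table with duplicate counts on physical registers."""
    def __init__(self, num_arch_regs=4):
        self.map = {}         # architectural reg -> physical reg id
        self.dup_count = {}   # physical reg id -> number of arch mappings
        self.next_phys = 0
        for r in range(num_arch_regs):
            self._alloc(r)

    def _alloc(self, arch):
        self._release(arch)
        p = self.next_phys
        self.next_phys += 1
        self.map[arch] = p
        self.dup_count[p] = 1
        return p

    def _release(self, arch):
        p = self.map.get(arch)
        if p is not None:
            self.dup_count[p] -= 1
            if self.dup_count[p] == 0:
                del self.dup_count[p]  # physical register returns to free list

    def zero_cycle_move(self, dst, src):
        """Remap dst onto src's physical register; nothing is executed."""
        p = self.map[src]
        self._release(dst)
        self.map[dst] = p
        self.dup_count[p] += 1
```

After `zero_cycle_move(1, 0)`, architectural registers 0 and 1 share one physical register with a duplicate count of 2, and the physical register previously mapped to register 1 is freed.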
20160026464 | Programmable Counters for Counting Floating-Point Operations in SIMD Processors - A processor includes one or more execution units to execute instructions, each having one or more elements in different element sizes using one or more registers in different register sizes. The processor further includes a counter configured to count a number of instructions performing predetermined types of operations executed by the one or more execution units. The processor further includes one or more registers to allow an external component to configure the counter to count a number of instructions associated with a combination of a register size and an element size (register/element size) and to retrieve a counter value produced by the counter. | 2016-01-28 |
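The counter's behavior can be modeled in a few lines: software programs one or more (register size, element size) pairs, and the counter then ticks only for floating-point instructions retiring with a matching combination. This is a behavioral sketch with made-up names, not the hardware interface.

```python
class FlopCounter:
    """Counter programmable per (register size, element size) combination."""
    def __init__(self):
        self.enabled = set()  # (register_bits, element_bits) pairs to count
        self.count = 0

    def configure(self, register_bits, element_bits):
        # Models software writing the configuration registers.
        self.enabled.add((register_bits, element_bits))

    def retire(self, op_type, register_bits, element_bits):
        # Called by an execution unit as each instruction retires.
        if op_type == "fp" and (register_bits, element_bits) in self.enabled:
            self.count += 1
```

Configuring the counter for 256-bit registers with 32-bit elements, for example, counts 256/32 FP instructions while ignoring 128/32 FP work and all integer work.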
20160026465 | DATA PROCESSING APPARATUS AND METHOD - A data processing apparatus comprises a processing circuit and an instruction decoder. A bitfield manipulation instruction controls the processing apparatus to generate at least one result data element from corresponding first and second source data elements. Each result data element includes a portion corresponding to a bitfield of the corresponding first source data element. Bits of the result data element that are more significant than the inserted bitfield have a prefix value that is selected, based on a control value specified by the instruction, as one of a first prefix value having a zero value, a second prefix value having the value of a portion of the corresponding second source data element, and a third prefix value corresponding to a sign extension of the bitfield of the first source data element. | 2016-01-28 |
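The three prefix choices can be demonstrated for one 32-bit element. The layout assumed here (the bitfield extracted from `src1` and placed at the bottom of the result) is one plausible reading of the abstract, not a claim about the actual instruction encoding.

```python
MASK32 = 0xFFFFFFFF

def bitfield_insert(src1, src2, lsb, width, control):
    """Extract `width` bits of src1 starting at `lsb`; fill the more
    significant result bits per the control value:
    0 = zero prefix, 1 = prefix from src2, 2 = sign-extend the bitfield."""
    field = (src1 >> lsb) & ((1 << width) - 1)
    if control == 0:
        prefix = 0
    elif control == 1:
        prefix = src2 & ~((1 << width) - 1) & MASK32
    else:
        sign = (field >> (width - 1)) & 1
        prefix = (MASK32 << width) & MASK32 if sign else 0
    return (prefix | field) & MASK32
```

With `src1 = 0xB0`, extracting the 4-bit field `0xB` at bit 4 yields `0x0000000B` (zero prefix), `0xAAAAAAAB` (prefix from `src2 = 0xAAAAAAAA`), or `0xFFFFFFFB` (sign extension, since the field's top bit is set).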
20160026466 | INSTRUCTION SET FOR SUPPORTING WIDE SCALAR PATTERN MATCHES - A processor includes an instruction decoder to receive an instruction having a first operand, a second operand, and a third operand, and an execution unit coupled to the instruction decoder to execute the instruction, the execution unit to individually perform a shift-left operation by at least one bit for each of a plurality of data elements stored in a storage location indicated by the second operand, for each of the data elements that has an overflow in response to the shift-left operation, to carry over the overflow into an adjacent data element based on a first bitmask obtained from the third operand, generating a final result, and to store the final result in a storage location indicated by the first operand. | 2016-01-28 |
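The effect is that masked runs of small elements shift as if they were one wide scalar. A sketch over 8-bit elements (element order and mask convention are assumptions for illustration; elements are little-endian, bit i of the mask gates the carry out of element i):

```python
def wide_shift_left(elements, shift, carry_mask, width=8):
    """Shift each element left; where carry_mask permits, propagate the
    overflow into the next (more significant) element."""
    limit = 1 << width
    out = [(e << shift) % limit for e in elements]
    for i, e in enumerate(elements[:-1]):
        overflow = (e << shift) >> width   # bits shifted out of element i
        if (carry_mask >> i) & 1:
            out[i + 1] = (out[i + 1] | overflow) % limit
    return out
```

Shifting `[0x80, 0x01]` left by 1 with the carry enabled gives `[0x00, 0x03]`, exactly the result of shifting the 16-bit value `0x0180`; with the carry masked off, the elements shift independently to `[0x00, 0x02]`.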
20160026467 | INSTRUCTION AND LOGIC FOR EXECUTING INSTRUCTIONS OF MULTIPLE-WIDTHS - A processor includes an execution unit, a decoder, an operation width tracker, and an allocator. The decoder includes logic to decode a received instruction. The operation width tracker includes logic to track a state indicating a currently used width of one or more registers of the processor. The allocator includes logic to selectively blend the instruction with a higher number of bits based upon a width of the instruction and the state. The execution unit may include logic to execute the selectively blended instructions. | 2016-01-28 |
20160026468 | SM4 ACCELERATION PROCESSORS, METHODS, SYSTEMS, AND INSTRUCTIONS - A processor of an aspect includes a plurality of packed data registers, and a decode unit to decode an instruction. The instruction is to indicate one or more source packed data operands. The one or more source packed data operands are to have four 32-bit results of four prior SM4 cryptographic rounds, and four 32-bit values. The processor also includes an execution unit coupled with the decode unit and the plurality of the packed data registers. The execution unit, in response to the instruction, is to store four 32-bit results of four immediately subsequent and sequential SM4 cryptographic rounds in a destination storage location that is to be indicated by the instruction. | 2016-01-28 |
20160026469 | DATA CACHE SYSTEM AND METHOD - A data cache system is provided. The system includes a central processing unit (CPU), a memory system, an instruction track table, a tracker and a data engine. The CPU is configured to execute instructions and read data. The memory system is configured to store the instructions and the data. The instruction track table is configured to store corresponding information of branch instructions stored in the memory system. The tracker is configured to point to a first data read instruction after an instruction currently being executed by the CPU. The data engine is configured to calculate a data address in advance before the CPU executes the data read instruction pointed to by the tracker. Further, the data engine is also configured to control the memory system to provide the corresponding data for the CPU based on the data address. | 2016-01-28 |
20160026470 | Conditional Branch Prediction Using a Long History - Methods and conditional branch predictors for predicting an outcome of a conditional branch instruction in a program executed by a processor using a long conditional branch history include generating a first index from a first portion of the conditional branch history and a second index from a second portion of the conditional branch history. The first index is then used to identify an entry in a first pattern history table including first prediction information; and the second index is used to identify an entry in a second pattern history table including second prediction information. The outcome of the conditional branch is predicted based on the first and second prediction information. | 2016-01-28 |
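The two-table scheme can be modeled with a small simulator: the long history register is split in half, each half is hashed with the branch address into its own pattern history table of 2-bit saturating counters, and the two counters are combined into a prediction. The table sizes, hash, and combination rule below are illustrative choices, not those of the patent.

```python
class LongHistoryPredictor:
    """Two pattern-history tables indexed by halves of a long branch history."""
    def __init__(self, history_bits=32, table_bits=10):
        self.history = 0
        self.history_bits = history_bits
        self.table_bits = table_bits
        size = 1 << table_bits
        self.pht1 = [2] * size  # 2-bit saturating counters, init weakly taken
        self.pht2 = [2] * size

    def _indices(self, pc):
        half = self.history_bits // 2
        lo = self.history & ((1 << half) - 1)   # first (recent) portion
        hi = self.history >> half               # second (older) portion
        mask = (1 << self.table_bits) - 1
        return (pc ^ lo) & mask, (pc ^ hi) & mask

    def predict(self, pc):
        i1, i2 = self._indices(pc)
        # Combine the two pieces of prediction information (simple sum here).
        return self.pht1[i1] + self.pht2[i2] >= 4

    def update(self, pc, taken):
        i1, i2 = self._indices(pc)
        for pht, i in ((self.pht1, i1), (self.pht2, i2)):
            pht[i] = min(3, pht[i] + 1) if taken else max(0, pht[i] - 1)
        self.history = ((self.history << 1) | int(taken)) \
            & ((1 << self.history_bits) - 1)
```

Splitting the history keeps each table index short enough to be practical while still letting very old branch outcomes influence the prediction.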
20160026471 | SEMICONDUCTOR DEVICE AND METHOD OF OPERATING THE SAME - A semiconductor device includes one or more internal circuits; a nonvolatile memory circuit including a first region suitable for storing first data for the nonvolatile memory circuit and a second region suitable for storing second data for the internal circuits; a first register suitable for temporarily storing the first data; one or more second registers suitable for temporarily storing the second data; and a control circuit suitable for controlling the nonvolatile memory circuit to transmit the first data and the second data to the first register and to the second registers, respectively, when a boot-up operation is performed. | 2016-01-28 |
20160026472 | METHOD FOR IMPLEMENTING "INSTANT BOOT" IN A CUSTOMIZABLE SOC - A method for implementing an instant boot function in a customizable system on a chip (SoC) integrated circuit having an application specific integrated circuit portion including configuration registers includes providing a field programmable gate array fabric on the SoC, providing non-volatile memory cells on the SoC, and initializing the configuration registers using data from the non-volatile memory cells during a system reset mode of operation of the integrated circuit. | 2016-01-28 |
20160026473 | SYSTEM MANAGEMENT CONTROLLER - For system management applied to a computer system, a power supply of the computer system starts to power a motherboard and a CPU thereon. A reset holding module in a system management controller holds the CPU in a Power-on Reset (PoR) state. The system management controller executes an operation requested by a user. The reset holding module releases the CPU from the PoR state in response to the system management controller completing the operation. | 2016-01-28 |
20160026474 | CACHING BASED OPERATING SYSTEM INSTALLATION - An image of system software is installed by loading an executable image of the system software using a boot loader, where the executable image includes a kernel and a plurality of files used by the kernel. The kernel of the system software is executed to generate the image of the system software that includes a copy of the kernel. Generating the image of the system software involves the steps of generating a plurality of pointers that each point to a different one of the files, retrieving the files using the pointers, and storing a copy of the kernel and the files in a storage device from which the system software is to be booted as the image of the system software. | 2016-01-28 |
20160026475 | Virtualized Boot Block with Discovery Volume - A file system independent virtualized boot block with discovery volume and cover files renders a volume visible when accessed by an accessing system which differs from a source system. For example, a downlevel operating system recognizes that data is present on a volume created in an uplevel operating system, even where the uplevel data itself may not be accessible. | 2016-01-28 |
20160026476 | USB COMMUNICATIONS TUNNELING THROUGH USB PRINTER DEVICE CLASS - A USB tunnel apparatus is disclosed herein. In various aspects, the USB tunnel apparatus may include a USB printer class interface operatively received by an application specific USB peripheral. The USB printer class interface is configured to identify the application specific USB peripheral as a printer class device to the host during Plug and Play enumeration, and the USB printer class interface is configured to generate a response during Plug and Play enumeration that alters the process of PnP enumeration to create a partially instantiated printer driver stack on the host when the application specific USB peripheral is in USB communication with the host, in various aspects. Related methods and compositions of matter are also disclosed. This Abstract is presented to meet requirements of 37 C.F.R. §1.72(b) only. This Abstract is not intended to identify key elements of the apparatus, methods, and compositions of matter disclosed herein or to delineate the scope thereof. | 2016-01-28 |
20160026477 | OUT-OF-BAND RETRIEVAL OF NETWORK INTERFACE CONTROLLER INFORMATION - In some implementations, network interface controller (NIC) configuration information can be obtained from a NIC prior to booting up an operating system. For example, a Basic Input Output System (BIOS) can obtain the NIC configuration information from the NIC during the execution of a system check (e.g., Power-On Self-Test). A system controller can receive the NIC configuration information from the BIOS. The system controller can store the NIC configuration information in memory associated with the system controller. A management system can request the NIC configuration information from the system controller using an out-of-band communication channel. For example, the management system can send the request for NIC configuration information to the system controller prior to powering on a server using a dedicated network interface of the system controller. | 2016-01-28 |
20160026478 | Converting Desktop Applications to Web Applications - Technologies are described herein for converting a desktop application to a web application. An interface file is generated based on a user interface of the desktop application. A logic file is generated based on application executables of the desktop application. A data model is generated based on application data and states of the desktop application. The web application is generated based on the interface file, the logic file, and the data model. | 2016-01-28 |
20160026479 | METHOD AND APPARATUS FOR SELECTING AN INTERCONNECT FREQUENCY IN A COMPUTING SYSTEM - In an embodiment, a processor includes at least one core and an interconnect that couples the at least one core and the cache memory. The interconnect is to operate at an interconnect frequency (f | 2016-01-28 |
20160026480 | SUBSTRATE PROCESSING SYSTEM, STORAGE MEDIUM AND METHOD OF REGISTERING NEW DEVICE - A substrate processing system includes a main control unit having a configuration file in which ID information and detail information about devices for processing a substrate are recorded, wherein the detail information includes information needed for controlling the devices, and a module controller having a list file obtained by converting the configuration file into a readable form, the module controller controlling the devices described in the list file on the basis of instructions from the main control unit. The module controller automatically adds, to the list file, ID information and detail information about a new device newly connected to the module controller to establish a condition under which the new device can be controlled. | 2016-01-28 |
20160026481 | SYSTEM AND METHOD FOR DEPLOYING A DATA-PATH-RELATED PLUG-IN FOR A LOGICAL STORAGE ENTITY OF STORAGE SYSTEM - A method for deploying a data-path-related plug-in for a logical storage entity of a storage system, the method comprising: deploying the data-path-related plug-in for the logical storage entity, wherein the deploying includes creating a plug-in inclusive data-path specification and wherein the plug-in inclusive data-path specification includes operation of the data-path-related plug-in; and creating a verification data-path specification, wherein the verification data-path specification does not include operation of the data-path-related plug-in and wherein a task executed in a verification data path, having the verification data-path specification, generates verification data that enables validation of given data generated by the task being executed in a plug-in inclusive data-path having the plug-in inclusive data-path specification. | 2016-01-28 |
20160026482 | USING A PLURALITY OF CONVERSION TABLES TO IMPLEMENT AN INSTRUCTION SET AGNOSTIC RUNTIME ARCHITECTURE - A system for an agnostic runtime architecture is disclosed. The system includes a system emulation/virtualization converter, an application code converter, and a system converter wherein the system emulation/virtualization converter and the application code converter implement a system emulation process, and wherein the system converter implements a system conversion process for executing code from a guest image. The system converter further comprises a guest fetch logic component for accessing a plurality of guest instructions, a guest fetch buffer coupled to the guest fetch logic component and a branch prediction component for assembling the plurality of guest instructions into a guest instruction block, and a plurality of conversion tables including a first level conversion table and a second level conversion table coupled to the guest fetch buffer for translating the guest instruction block into a corresponding native conversion block. The system further includes a native cache coupled to the conversion tables for storing the corresponding native conversion block, a conversion look aside buffer coupled to the native cache for storing a mapping of the guest instruction block to corresponding native conversion block. Upon a subsequent request for a guest instruction, the conversion look aside buffer is indexed to determine whether a hit occurred, wherein the mapping indicates the guest instruction has a corresponding converted native instruction in the native cache, and in response to the hit the conversion look aside buffer forwards the translated native instruction for execution. | 2016-01-28 |
20160026483 | SYSTEM FOR AN INSTRUCTION SET AGNOSTIC RUNTIME ARCHITECTURE - A system for an agnostic runtime architecture. The system includes a close to bare metal JIT conversion layer, a runtime native instruction assembly component included within the conversion layer for receiving instructions from a guest virtual machine, and a runtime native instruction sequence formation component included within the conversion layer for receiving instructions from native code. The system further includes a dynamic sequence block-based instruction mapping component included within the conversion layer for code cache allocation and metadata creation, and is coupled to receive inputs from the runtime native instruction assembly component and the runtime native instruction sequence formation component, and wherein the dynamic sequence block-based instruction mapping component receives resulting processed instructions from the runtime native instruction assembly component and the runtime native instruction sequence formation component and allocates the resulting processed instructions to a processor for execution. | 2016-01-28 |
20160026484 | SYSTEM CONVERTER THAT EXECUTES A JUST IN TIME OPTIMIZER FOR EXECUTING CODE FROM A GUEST IMAGE - A system for an agnostic runtime architecture. The system includes a system emulation/virtualization converter, an application code converter, and a system converter wherein the system emulation/virtualization converter and the application code converter implement a system emulation process, and wherein the system converter implements a system conversion process for executing code from a guest image. The system converter executes a JIT optimizer, and wherein the JIT optimizer ensures loads are not dispatched ahead of other loads that are to a same memory address by checking for the same address from subsequent loads from a same thread. | 2016-01-28 |
20160026485 | SYSTEM AND METHOD OF LOADING VIRTUAL MACHINES - A system and method are provided for swapping a first virtual machine for a second virtual machine by modifying those portions of memory where the two machines differ. | 2016-01-28 |
20160026486 | AN ALLOCATION AND ISSUE STAGE FOR REORDERING A MICROINSTRUCTION SEQUENCE INTO AN OPTIMIZED MICROINSTRUCTION SEQUENCE TO IMPLEMENT AN INSTRUCTION SET AGNOSTIC RUNTIME ARCHITECTURE - A system for an agnostic runtime architecture. The system includes a system emulation/virtualization converter, an application code converter, and a system converter wherein the system emulation/virtualization converter and the application code converter implement a system emulation process, and wherein the system converter implements a system conversion process for executing code from a guest image. The system converter further comprises an instruction fetch component for fetching an incoming microinstruction sequence, a decoding component coupled to the instruction fetch component to receive the fetched macro instruction sequence and decode it into a microinstruction sequence, and an allocation and issue stage coupled to the decoding component to receive the microinstruction sequence and perform optimization processing by reordering the microinstruction sequence into an optimized microinstruction sequence comprising a plurality of dependent code groups. A microprocessor pipeline is coupled to the allocation and issue stage to receive and execute the optimized microinstruction sequence. A sequence cache is coupled to the allocation and issue stage to receive and store a copy of the optimized microinstruction sequence for subsequent use upon a subsequent hit on the optimized microinstruction sequence, and a hardware component is coupled for moving instructions in the incoming microinstruction sequence. | 2016-01-28 |
20160026487 | USING A CONVERSION LOOK ASIDE BUFFER TO IMPLEMENT AN INSTRUCTION SET AGNOSTIC RUNTIME ARCHITECTURE - A system for an agnostic runtime architecture. The system includes a system emulation/virtualization converter, an application code converter, and a converter, wherein a system emulation/virtualization converter and an application code converter implement a system emulation process. The system converter implements a system and application conversion process for executing code from a guest image, wherein the system converter or the system emulator accesses a plurality of guest instructions that comprise multiple guest branch instructions, and assembles the plurality of guest instructions into a guest instruction block. The system converter also translates the guest instruction block into a corresponding native conversion block, stores the native conversion block into a native cache, and stores a mapping of the guest instruction block to corresponding native conversion block in a conversion look aside buffer. Upon a subsequent request for a guest instruction, the conversion look aside buffer is indexed to determine whether a hit occurred, wherein the mapping indicates the guest instruction has a corresponding converted native instruction in the native cache, and forwards the converted native instruction for execution in response to the hit. | 2016-01-28 |
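The conversion lookaside buffer described in this abstract (and in 20160026482 above) amounts to memoizing the expensive guest-to-native translation, keyed by guest block address. A minimal sketch, with illustrative names; the translation function is a placeholder for the converter:

```python
class ConversionLookasideBuffer:
    """Caches the mapping guest block address -> translated native block."""
    def __init__(self, translate):
        self.translate = translate   # slow guest->native conversion process
        self.native_cache = {}       # guest block address -> native block
        self.hits = 0
        self.misses = 0

    def fetch_native(self, guest_addr, guest_block):
        if guest_addr in self.native_cache:    # hit: forward for execution
            self.hits += 1
        else:                                  # miss: convert, then store
            self.misses += 1
            self.native_cache[guest_addr] = self.translate(guest_block)
        return self.native_cache[guest_addr]
```

On a repeated request for the same guest block, the translation step is skipped entirely and the cached native block is forwarded.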
20160026488 | INSTRUCTION SET EMULATION FOR GUEST OPERATING SYSTEMS - The described implementations relate to virtual computing techniques. One implementation provides a technique that can include receiving a request to execute an application. The application can include first application instructions from a guest instruction set architecture. The technique can also include loading an emulator and a guest operating system into an execution context with the application. The emulator can translate the first application instructions into second application instructions from a host instruction set architecture. The technique can also include running the application by executing the second application instructions. | 2016-01-28 |
20160026489 | LIVE MIGRATION OF VIRTUAL MACHINES THAT USE EXTERNALIZED MEMORY PAGES - A method includes running a Virtual Machine (VM) on a first compute node in a plurality of compute nodes that communicate with one another over a communication network. The VM is migrated from the first compute node to a second compute node in the plurality by generating, for memory pages accessed by the VM, page transfer state of one or more local memory pages that are accessed locally on the first compute node, and of one or more externalized memory pages whose access is not confined to the first node. Based on the page transfer state, the migrated VM is provided with access to the memory pages, including both the local and the externalized memory pages, on the second compute node. | 2016-01-28 |
20160026490 | HYPERVISOR AND PHYSICAL MACHINE AND RESPECTIVE METHODS THEREIN FOR PERFORMANCE MEASUREMENT - A method performed by a hypervisor executing a virtual machine for enabling a performance measurement between the virtual machine and a peer node, and a method performed by a physical machine comprising the hypervisor are provided. The method performed by the hypervisor comprises intercepting a packet transmitted from, or destined to, the virtual machine, the packet comprising a destination address to the virtual machine or to the peer node, and determining whether to insert a hypervisor time stamp or not in the packet. The method further comprises, when it is determined to insert the hypervisor time stamp in the packet, inserting a hypervisor time stamp in the packet, and forwarding the packet to its destination according to the destination address. | 2016-01-28 |
20160026491 | APPARATUS AND METHOD FOR LEVERAGING SEMI-SUPERVISED MACHINE LEARNING FOR SELF-ADJUSTING POLICIES IN MANAGEMENT OF A COMPUTER INFRASTRUCTURE - Embodiments relate to a method for managing and analyzing a computer infrastructure. The method includes receiving, by a host device, a set of data elements from at least one computer environment resource of the computer infrastructure, each data element of the set of data elements relating to an attribute of the at least one computer environment resource. The method includes applying a system analysis function to the set of data elements to characterize a dataset specification associated with the set of data elements. The method includes receiving, by the host device, a user-selected policy threshold criterion based on the dataset specification and providing the user-selected policy threshold criterion to a semi-supervised learning algorithm as a parameter. The method includes adjusting a boundary of the dataset specification of the set of data elements, as associated with the user-selected policy threshold criterion, based on a behavioral change of the computer infrastructure. | 2016-01-28 |
20160026492 | ELECTRONIC APPARATUS FOR EXECUTING VIRTUAL MACHINE AND METHOD FOR EXECUTING VIRTUAL MACHINE - A method for executing a virtual machine (VM) in an electronic device is provided. The method includes obtaining a position of a first base disk image stored in a disk image storage, creating a root disk image that backs the first base disk image based on the obtained position, and executing the VM based on the created root disk image. The method further includes, at run-time of the VM, changing the first base disk image to a second base disk image, and continuing the VM based on the merged root disk image. | 2016-01-28 |
20160026493 | PLANNED VIRTUAL MACHINES - A planned virtual machine is provided for use in staging the construction of a virtual machine. Such a planned virtual machine may be used as part of a method for migrating virtual machines. The method may include creating a planned virtual machine based on a first realized virtual machine or a template, performing a configuration operation on the planned virtual machine, and converting the planned virtual machine to a second realized virtual machine. The configuration operation may comprise interaction with a virtualization platform managing the planned virtual machine and may be based on input provided by a user. | 2016-01-28 |
20160026494 | MID-THREAD PRE-EMPTION WITH SOFTWARE ASSISTED CONTEXT SWITCH - Methods and apparatus relating to mid-thread pre-emption with software assisted context switch are described. In an embodiment, one or more threads executing on a Graphics Processing Unit (GPU) are stopped at an instruction level granularity in response to a request to pre-empt the one or more threads. The context data of the one or more threads is copied to memory in response to completion of the one or more threads at the instruction level granularity and/or one or more instructions. Other embodiments are also disclosed and claimed. | 2016-01-28 |
20160026495 | EVENT PROCESSING SYSTEMS AND METHODS - An event processing system includes a multi-agent based system, which includes a core engine configured to define and deploy a plurality of agents configured to perform a first set of programmable tasks defined by one or more users. The first set of tasks operates with real time data. The multi-agent based system also includes a monitoring engine configured to monitor a lifecycle of the agents, communication amongst the agents and processing time of the tasks. The multi-agent based system further includes a computing engine coupled to the core engine and configured to execute the first set of tasks. The event processing system includes a batch processing system configured to enable deployment of a second set of programmable tasks that operates with non-real time data and a studio coupled to the multi-agent based system and configured to enable users to manage the multi-agent based system and the batch processing system. | 2016-01-28 |
20160026496 | METHOD AND SYSTEM FOR PROFILING VIRTUAL APPLICATION RESOURCE UTILIZATION PATTERNS - A method and system for profiling execution of an application implemented by an application file comprising a plurality of data blocks. The application is executed in response to an execute command from a management process. Read messages are sent to the management process each time the application reads one or more of the plurality of data blocks of the application file. The management process records information about the read operations in one or more transcripts which may be used to create a streaming model for the application allowing the application to be downloaded using a conventional download protocol without using a specialized streaming protocol. | 2016-01-28 |
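One plausible way such read transcripts become a streaming model is to order the file's blocks by how often and how early profiled runs read them, so that a plain sequential download tends to deliver blocks before the application needs them. The heuristic below is an illustrative sketch, not the patent's algorithm:

```python
from collections import Counter

def build_streaming_model(transcripts):
    """Given one list of block ids per profiled run, return a download
    order: most frequently read blocks first, earliest-read breaking ties."""
    first_seen = {}
    freq = Counter()
    for transcript in transcripts:         # one transcript per profiled run
        for position, block in enumerate(transcript):
            freq[block] += 1
            first_seen.setdefault(block, position)
    return sorted(freq, key=lambda b: (-freq[b], first_seen[b]))
```

Blocks never read during profiling simply sort to the end of (or drop out of) the model, which is what lets the download start the application long before the whole file arrives.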
20160026497 | Effective Roaming for Software-as-a-Service Infrastructure - A method for providing a roaming service to a first client may be provided. The first client may be associated to at least one application service running on an associated virtual machine as a Cloud service via a primary route between the first client and the at least one application service. The method may comprise providing a first agent on the first client, and providing an alternative route to the primary route between the first client and the at least one application service utilizing a second agent running on a second client. Thereby, the alternative route is based on a set of preferences submitted by the first client. | 2016-01-28 |
20160026498 | POWER MANAGEMENT SYSTEM, SYSTEM-ON-CHIP INCLUDING THE SAME AND MOBILE DEVICE INCLUDING THE SAME - A power management system controlling power for a plurality of functional blocks included in a system-on-chip includes a plurality of programmable nano controllers, an instruction memory and a signal map memory. The instruction memory is shared by the nano controllers and stores a plurality of instructions that are used by the nano controllers. The signal map memory is shared by the nano controllers and stores a plurality of signals that are provided to the functional blocks and are controlled by the nano controllers. A first nano controller among the plurality of nano controllers is programmed as a central sequencer. Second through n-th nano controllers among the plurality of nano controllers are programmed as first sub-sequencers that are dependent on the first nano controller. | 2016-01-28 |
20160026499 | SYSTEMS, METHODS AND COMPUTER PROGRAM PRODUCTS FOR ADAPTIVE SELF-ORGANIZING SERVICE FOR ONLINE TASKS - Provided are systems, methods and computer program products. Embodiments may include methods that include receiving a query that includes multiple requests, each including target data and corresponding to different respective attributes of the query, and selectively and iteratively executing a portion of multiple elemental computer programs responsive to different ones of the requests. Ones of the elemental computer programs are configured to be executed to provide a portion of target values corresponding to respective ones of the requests. More than one of the elemental computer programs are executed to provide, in aggregate, target values corresponding to the target data. | 2016-01-28 |
20160026500 | System and Method of Providing System Jobs Within a Compute Environment - The disclosure relates to systems, methods and computer-readable media for using system jobs for performing actions outside the constraints of batch compute jobs submitted to a compute environment such as a cluster or a grid. The method for modifying a compute environment from a system job includes associating a system job with a queuable object, triggering the system job based on an event, and performing arbitrary actions on resources outside of compute nodes in the compute environment. The queuable objects include objects such as batch compute jobs or job reservations. The events that trigger the system job may be time driven, such as ten minutes prior to completion of the batch compute job, or dependent on other actions associated with other system jobs. The system jobs may be utilized also to perform rolling maintenance on a node by node basis. | 2016-01-28 |
20160026501 | MANAGING PROVISIONING OF STORAGE RESOURCES - A method is used in managing provisioning of storage resources. Access is provided to a provisioning decision making service configured to derive a storage provisioning decision based on information provided to the service. Provisioning of storage resources on a storage system is enabled over a communication medium by using the provisioning decision making service. | 2016-01-28 |
20160026502 | MUTABLE CHRONOLOGIES FOR ACCOMMODATION OF RANDOMLY OCCURRING EVENT DELAYS - Causing a computing system to process events from a sequence of events that defines a correct order for said events independent from an order in which those events are received includes: defining a first variable, defining, for the first variable, a chronology of operations on the first variable associated with received events, receiving a first event that pertains to the first variable, executing a first operation on said first variable that results in a first update of the chronology, receiving a delayed event that pertains to the first variable, executing a second operation on said first variable that results in a second update of the chronology, and determining whether the first update is valid or invalid, wherein the delayed event precedes the first event in the sequence, the first update is based on the first event, and the second update is based on the delayed event. | 2016-01-28 |
20160026503 | METHOD AND APPARATUS FOR IMPROVING APPLICATION PROCESSING SPEED IN DIGITAL DEVICE - A method and apparatus improve application processing speed in a digital device running in an embedded environment where processor performance may not be sufficiently powerful. The method includes detecting an execution request for an application, identifying the group to which the requested application belongs among preset groups with different priorities, scheduling the requested application according to the priority assigned to the identified group, and executing the requested application based on the scheduling result. | 2016-01-28 |
20160026504 | ASYNCHRONOUS DISPATCHER FOR APPLICATION FRAMEWORK - The described technology is directed towards an asynchronous dispatcher including control logic that manages a queue set, including dequeuing and executing work items from the queue on behalf of application code executing in a program. The dispatcher yields control to the program to allow the program and application code to remain responsive with respect to user interface operations. | 2016-01-28 |
20160026505 | LOAD BALANCING FOR SINGLE-ADDRESS TENANTS - When a load balancer detects that a virtual address is associated with a single destination address, the load balancer sets a flag to distinguish the virtual address from virtual addresses that are associated with a plurality of destination addresses. The load balancer instructs the router to bypass the load balancer for network packets that are addressed to the virtual address, and refrains from storing subsequent flow state for the virtual address. When the virtual address is to be scaled up with an additional destination address, the load balancer sets a flag to distinguish the virtual address from virtual addresses that are associated with a single destination address. The load balancer instructs the router to route network packets that are addressed to the virtual address through the load balancer, instead of bypassing the load balancer, and starts storing flow state for the virtual address. | 2016-01-28 |
20160026506 | System and method for managing excessive distribution of memory - Disclosed are a system and a method for managing excessive distribution (over-commitment) of memory. Based on a page sharing technology, the software types of the virtual machines running on the respective servers in a cluster are collected, and virtual machines running similar software are migrated to a specified server, improving the page sharing effect of the virtual machines and the excessive distribution effect of memory and ensuring that the bearing capability of the servers in the system is not wasted. The utilization of memory and resources is optimized across the whole system, and the memory of the whole cluster is distributed better. Moreover, fewer servers need to run, saving energy and running cost and reducing pressure on the environment and the emission of carbon dioxide; the disclosure therefore has notable social and economic benefits. | 2016-01-28 |
20160026507 | POWER AWARE TASK SCHEDULING ON MULTI-PROCESSOR SYSTEMS - Methods and apparatus for power-based scheduling of tasks among processors are disclosed. A method may include executing processor executable code on one or more of the processors to prompt a plurality of executable tasks for scheduling among the processors. Processor-demand information is obtained about the plurality of executable tasks in addition to capacity information for each of the processors. Processor power information for each of the processors is also obtained, and the plurality of executable tasks are scheduled on the lowest power processors where processor-demands of the tasks are satisfied. | 2016-01-28 |
20160026508 | INTERACTIVE ELECTRONIC BOOKS - There is disclosed a method of presenting content, comprising: a first application facilitating the display of a set of content, at least part of the content being associated with a link to a second application, wherein on selection of the link, the second application is enabled and content associated with the second application is displayed. | 2016-01-28 |
20160026509 | METHOD AND SYSTEM FOR IMPROVING STARTUP PERFORMANCE AND INTEROPERABILITY OF A VIRTUAL APPLICATION - A data structure including simple and complex objects. Each simple object includes a content type indicator, a size indicator, and one or more simple data types. Each complex object includes a content type indicator, a size indicator, and one or more child objects. The complex objects include a layer object having first and second child objects. The first child object is a collection of complex objects storing information for configuring a virtual filesystem of a virtual application at application startup. The second child object is a collection of complex objects storing information for configuring a virtual registry of the virtual application at application startup. Reading of selected simple and complex objects may be deferred at startup based on the content type indicator. Deferred objects may be read after startup when access to information stored by the deferred object is requested by the virtual application. | 2016-01-28 |
20160026510 | STRUCTURED LOGGING SYSTEM - The described technology is directed towards a structured logging technology in which events corresponding to program execution are received in a structured format and logged based upon filtering of those events. A log handler is associated with a filtering mechanism that determines whether each event matches filtering criteria and is thus to be logged by the log handler. The log handler provides matching logged events to an event sink, such as an analytic tool that consumes the events for analysis. | 2016-01-28 |
20160026511 | METHOD, APPARATUS AND SYSTEM FOR ACQUIRING INPUT EVENTS - A method, apparatus and system for acquiring an input event in a computer system in which processes have different priorities are provided, the method comprising: executing a servant process and a master process, wherein the servant process has a higher priority than the master process and maintains an input event list; and, upon the servant process acquiring an input event and determining that the input event is in the input event list, the servant process transmitting the input event to the master process. Using a high-priority servant process to acquire input events facilitates the operation of other processes and enhances process execution efficiency. | 2016-01-28 |
20160026512 | CROSS-PLATFORM EVENT ENGINE - A system for handling event input between disparate platforms includes a processor and a memory containing instructions executable by the processor whereby the system is operable to recognize an event associated with a first platform, the event having semantic content, translate the event into a form recognizable by a second platform, and communicate the event in the translated form to the second platform. The second platform is configured for effectuating the semantic content of the event. | 2016-01-28 |
20160026513 | ASYNCHRONOUS COMMUNICATIONS HAVING COMPOUNDED RESPONSES - A first request to execute a first task is received from a first module in a first address space by a second module in a second address space. The first task is placed into a task queue for execution in the second address space. Pending responses not yet returned to the first module, which are results of executing other tasks in the second address space, are extracted by the second module from a response queue. Requests for the other tasks were previously sent by the first module to the second module for execution in the second address space. The pending responses are compounded: the pending responses and a return value acknowledging the first request to execute the first task are combined by the second module into a combined communication. The combined communication is transmitted by the second module to the first module in the first address space. | 2016-01-28 |
20160026514 | STATE MIGRATION FOR ELASTIC VIRTUALIZED COMPONENTS - A capability for supporting an elastic virtualized component that is stateful is provided by supporting state migration for the elastic virtualized component. The elastic virtualized component may support a virtualized network function or any other suitable virtualized function. The elastic virtualized component includes a component load balancer and a set of component instances configured to provide functions of the elastic virtualized component. The elastic virtualized component may be configured to support migration of state information of the component instances following elasticity events in which the capacity of the elastic virtualized component changes (e.g., in response to growth events in which the number of component instances of which the elastic virtualized component is composed increases, in response to degrowth events in which the number of component instances of which the elastic virtualized component is composed decreases, or the like). | 2016-01-28 |
20160026515 | SEGMENTED SOFTWARE ARCHITECTURE - In one example, a device includes at least one processor and one or more storage devices encoded with instructions that, when executed by the at least one processor, cause the at least one processor to execute a software program comprising three or more segments. Each of the three or more segments includes one or more modules. Each respective module is a member of only one of the three or more segments and implements an interface that enables direct communication between the respective module and modules that are members of any other of the three or more segments. All modules that are members of a respective segment implement a common interface associated with the respective segment. | 2016-01-28 |
20160026516 | PACKET PROCESSING ON A MULTI-CORE PROCESSOR - A method for packet processing on a multi-core processor. According to one embodiment of the invention, a first set of one or more processing cores are configured to include the capability to process packets belonging to a first set of one or more packet types, and a second set of one or more processing cores are configured to include the capability to process packets belonging to a second set of one or more packet types, where the second set of packet types is a subset of the first set of packet types. Packets belonging to the first set of packet types are processed at a processing core of either the first or second set of processing cores. Packets belonging to the second set of packet types are processed at a processing core of the first set of processing cores. | 2016-01-28 |
20160026517 | SYSTEM INTEGRATOR AND SYSTEM INTEGRATION METHOD WITH RELIABILITY OPTIMIZED INTEGRATED CIRCUIT CHIP SELECTION - Disclosed is a computer system for system integration, wherein chip selection for a specific system is performance and reliability optimized and, thereby, cost optimized. In the system, a memory stores a chip-level performance specification and a chip-level reliability specification, each defined for a specific integrated circuit chip that is to be incorporated into a specific system. The memory also stores an inventory that references manufactured instances of the specific integrated circuit chip sorted into bins, which are associated with different performance process windows and which are assigned different reliability levels. A processor uses the inventory to select an instance of the specific integrated circuit chip from one of the bins for actual incorporation into the specific system and does so such that the chip-level performance specification and the chip-level reliability specification are met. Also disclosed are a method and a computer program product that can similarly perform system integration. | 2016-01-28 |
20160026518 | RECOVERY PROGRAM USING DIAGNOSTIC RESULTS - Techniques for recovering an enclosure are provided. A recovery program is retrieved from a recovery program repository. Results from a plurality of diagnostic tests are retrieved. The diagnostic test results are analyzed with the recovery program. The recovery program determines an enclosure recovery action. The enclosure is recovered using the determined recovery action. | 2016-01-28 |
20160026519 | Application Compatibility Leveraging Successful Resolution of Issues - Application compatibility techniques are described. In one or more implementations, one or more computing devices of a service provider receive data from a plurality of client devices via a network, the data describing one or more attempts that were at least partially successful in resolving one or more incompatibilities in execution of one or more applications on respective computing devices. The data is mined based on one or more criteria to identify at least one of the applications and validated to confirm the at least partial success in the resolution of at least one of the incompatibilities for the identified application. Data is stored that describes validated successful resolution of the incompatibilities and an update is disseminated based at least on the stored data to resolve the incompatibilities. | 2016-01-28 |
20160026520 | RAINBOW EVENT DROP DETECTION SYSTEM - In one embodiment, data received from one or more streaming data sources may be monitored by one or more devices. A rate of change in flow of the data received from the one or more streaming data sources may be ascertained. It may be determined whether the rate of change in flow of the data received from the one or more streaming data sources exceeds a threshold rate. Transmission of an alert may be initiated according to a result of determining whether the rate of change in the flow of the data received from the one or more streaming data sources exceeds the threshold rate. | 2016-01-28 |
20160026521 | EXCEPTION WRAPPING SYSTEM - The described technology is directed towards handling errors in an application program and allows for a taxonomy and precedence order of errors. Exception wrapping preserves relevant information with an exception and consolidates a series of errors into a single dominant exception instance that is handled appropriately depending on the exception type. Also described is a centralized exception manager that outputs an interactive dialog based upon the exception type, and takes a recovery action based upon user interaction with the dialog. | 2016-01-28 |
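The single-address flagging behavior summarized in 20160026505 above can be illustrated with a toy state machine: flag single-destination virtual addresses, tell the router to bypass the load balancer for them, and drop their flow state. The class, field names, and router interface below are hypothetical sketches; the abstract does not specify an API:

```python
class LoadBalancer:
    """Toy model of the single-address-tenant behavior in 20160026505."""

    def __init__(self):
        self.single_address = {}  # virtual address -> flag
        self.flow_state = {}      # virtual address -> per-flow records
        self.router_bypass = set()  # addresses the router sends around us

    def map_vip(self, vip, destinations):
        if len(destinations) == 1:
            # Single destination: flag the address, instruct the router
            # to bypass the load balancer, and stop keeping flow state.
            self.single_address[vip] = True
            self.router_bypass.add(vip)
            self.flow_state.pop(vip, None)
        else:
            # Scaled up: clear the bypass and start tracking flows again.
            self.single_address[vip] = False
            self.router_bypass.discard(vip)
            self.flow_state.setdefault(vip, {})
```

A scale-up event is then just a second `map_vip` call with more destinations, which re-routes traffic back through the balancer.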
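The placement policy summarized in 20160026507 above (schedule tasks on the lowest-power processors whose capacity satisfies the tasks' demands) can be sketched as a greedy assignment. The task and processor representations and the greedy ordering are illustrative assumptions, not the claimed method:

```python
def schedule(tasks, processors):
    """Greedy power-aware placement sketch.

    tasks: list of (name, demand) pairs.
    processors: list of dicts with 'name', 'power', and 'capacity';
    capacity is decremented in place as tasks are placed.
    Returns a {task_name: processor_name} placement.
    """
    placement = {}
    # Try the lowest-power processors first.
    ordered = sorted(processors, key=lambda p: p["power"])
    # Place the heaviest demands first so they are not squeezed out.
    for name, demand in sorted(tasks, key=lambda t: -t[1]):
        for proc in ordered:
            if proc["capacity"] >= demand:
                proc["capacity"] -= demand
                placement[name] = proc["name"]
                break
    return placement
```

With two processors of equal capacity but different power, the low-power one fills up first and overflow spills to the higher-power one.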
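The monitoring loop summarized in 20160026520 above reduces to comparing the rate of change in data flow against a threshold and alerting when it is exceeded. A minimal sketch, assuming per-interval event counts as the flow measure (an illustrative choice, not the patented system):

```python
def rate_alerts(counts, threshold):
    """Yield the index of each interval whose change in flow relative to
    the previous interval exceeds the threshold rate.

    counts: per-interval event counts from a streaming data source.
    """
    for i in range(1, len(counts)):
        if abs(counts[i] - counts[i - 1]) > threshold:
            yield i
```

A sudden drop from ~100 events per interval to ~40 trips the alert at that interval, while small fluctuations stay silent.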
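The consolidation idea in 20160026521 above (fold a series of errors into one dominant exception chosen by a precedence order, while preserving the underlying errors) can be sketched as follows. The taxonomy, class names, and error tuples are hypothetical, invented for illustration only:

```python
# Hypothetical precedence order: lower number means more dominant.
PRECEDENCE = {"FatalError": 0, "NetworkError": 1, "ValidationError": 2}


class WrappedError(Exception):
    """Single exception instance that wraps a series of errors."""

    def __init__(self, dominant, all_errors):
        super().__init__(f"{dominant}: {len(all_errors)} error(s) consolidated")
        self.dominant = dominant      # kind used to pick the handler
        self.all_errors = all_errors  # preserved for diagnosis


def consolidate(errors):
    """errors: list of (kind, message) pairs. Returns a WrappedError
    whose dominant kind is the highest-precedence kind observed."""
    dominant = min(errors, key=lambda e: PRECEDENCE[e[0]])[0]
    return WrappedError(dominant, errors)
```

A handler can then dispatch on `dominant` alone while still logging every preserved error.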