52nd week of 2017 patent application highlights part 45 |
Patent application number | Title | Published |
20170371660 | LOAD-STORE QUEUE FOR MULTIPLE PROCESSOR CORES - Technology related to load-store queues for block-based processor architectures is disclosed. In one example of the disclosed technology, a processor includes multiple processor cores and a load-store queue. Each processor core is configured to execute an instruction block including load and store instructions. The instruction block can be identified by a block identifier, and each of the load and store instructions is identified with a load-store identifier. The load-store queue can be configured to enqueue load and store instructions from the processor cores in a buffer indexed based on a function of the block identifier and the load-store identifier. The buffer can be searched for store instructions having a target address matching a target address of a load instruction received from a first processor core. Load response data can be returned for the received load instruction to the first processor core based on the search of the buffer. | 2017-12-28 |
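The abstract describes a queue whose buffer is indexed by a function of the block identifier and the load-store identifier, and searched for stores matching a load's target address. A minimal sketch of that idea, with an assumed index function (block slot plus per-instruction offset) and a simplified search that is not the patented design:

```python
class LoadStoreQueue:
    """Illustrative load-store queue for a block-based architecture."""

    def __init__(self, ids_per_block=32, num_blocks=8):
        self.ids_per_block = ids_per_block
        self.buffer = [None] * (ids_per_block * num_blocks)

    def _index(self, block_id, ls_id):
        # Assumed index function: combine block identifier and
        # load-store identifier into a unique buffer slot.
        num_blocks = len(self.buffer) // self.ids_per_block
        return (block_id % num_blocks) * self.ids_per_block + ls_id

    def enqueue_store(self, block_id, ls_id, addr, data):
        self.buffer[self._index(block_id, ls_id)] = ("store", addr, data)

    def load(self, addr):
        # Search the buffer for a store whose target address matches
        # the load's target address and forward its data.
        for entry in self.buffer:
            if entry and entry[0] == "store" and entry[1] == addr:
                return entry[2]
        return None  # miss: the load would fall through to memory
```

A real queue would also respect program order among matching stores; this sketch returns the first match only.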
20170371661 | METHOD AND APPARATUS FOR IMPLEMENTING POWER MODES IN MICROCONTROLLERS USING POWER PROFILES - A method and apparatus for implementing power modes in microcontrollers (MCUs) using power profiles. In one embodiment of the method, a central processing unit (CPU) of the MCU executes a first instruction for calling a subroutine stored in a memory of the MCU, wherein the first instruction comprises a first parameter to be passed to the subroutine. Thereafter the CPU writes a first value to a first special function register (SFR) of the MCU in response to executing the first instruction, wherein the first value is related to the first parameter. The MCU operates in a first power mode in response to the CPU writing the first value to the first SFR. The CPU also executes a second instruction for calling the subroutine, wherein the second instruction comprises a second parameter to be passed to the subroutine. The CPU then writes a second value to a second SFR of the MCU in response to executing the second instruction, wherein the second value is related to the second parameter. The MCU operates in a second power mode in response to the CPU writing the second value to the second SFR. The MCU consumes more power operating in the first power mode than it does when operating in the second power mode. | 2017-12-28 |
20170371662 | EXTENSION OF REGISTER FILES FOR LOCAL PROCESSING OF DATA IN COMPUTING ENVIRONMENTS - A mechanism is described for facilitating extension of register files in computing environments. A method of embodiments, as described herein, includes facilitating, inside an extended register file, performance of one or more tasks relating to an instruction, where the one or more tasks are performed by an extension mechanism being hosted inside the extended register file of a computing device. | 2017-12-28 |
20170371663 | GLOBAL CAPABILITIES TRANSFERRABLE ACROSS NODE BOUNDARIES - Example implementations relate to global capabilities transferrable across node boundaries. For example, in an implementation, a switch that routes traffic between a node and global memory may receive an instruction from the node. The switch may recognize that data referenced by the instruction is a global capability, and the switch may process that global capability accordingly. | 2017-12-28 |
20170371664 | PROGRAM INFORMATION GENERATION SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT - A program information generation system includes circuitry configured to acquire a program and operation information, the program including a plurality of instruction codes including a start instruction code for starting a critical section and an end instruction code for ending the critical section, the operation information indicating an execution order of the plurality of instruction codes; identify the instruction code included in a first section corresponding to the critical section from the operation information, on the basis of the start instruction code, the end instruction code, and the operation information; determine a second section, corresponding to the first section, within the program on the basis of the instruction code included in the first section; and generate classification information for allowing specification of the instruction code included in the critical section or the instruction code included in a non-critical section, on the basis of the second section. | 2017-12-28 |
20170371665 | SYSTEM AND METHOD FOR PROCESSING DATA IN A COMPUTING SYSTEM - Systems, apparatuses, and methods for adjusting group sizes to match a processor lane width are described. In early iterations of an algorithm, a processor partitions a dataset into groups of data points which are integer multiples of the processing lane width of the processor. For example, when performing a K-means clustering algorithm, the processor determines that a first plurality of data points belong to a first group during a given iteration. If the first plurality of data points is not an integer multiple of the number of processing lanes, then the processor reassigns a first number of data points from the first plurality of data points to one or more other groups. The processor then performs the next iteration with these first number of data points assigned to other groups even though the first number of data points actually meets the algorithmic criteria for belonging to the first group. | 2017-12-28 |
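The group-sizing step above is concrete enough to sketch: if a cluster's point count is not an integer multiple of the processing-lane width, surplus points are reassigned to another group for that iteration, even though they meet the clustering criteria. The reassignment target (the next group, in one pass) is an assumption for illustration:

```python
def pad_groups_to_lane_width(groups, lane_width):
    """Adjust group sizes toward integer multiples of the lane width.

    groups: list of lists of data points. Returns adjusted copies.
    """
    adjusted = [list(g) for g in groups]
    for i, group in enumerate(adjusted):
        surplus = len(group) % lane_width
        if surplus and len(adjusted) > 1:
            # Temporarily reassign the surplus points to the next
            # group so this group fills whole SIMD lanes.
            moved = [group.pop() for _ in range(surplus)]
            adjusted[(i + 1) % len(adjusted)].extend(moved)
    return adjusted
```

A single pass can still leave the last group's spillover in an earlier group; the abstract implies this is tolerated for one iteration and corrected by later ones.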
20170371666 | EFFECTIVENESS AND PRIORITIZATION OF PREFETCHES - A method, system, and computer program product are provided for prioritizing prefetch instructions. The method includes a processor issuing a prefetch instruction and fetching elements from memory or a higher-level cache. The processor stores the elements in temporary storage and monitors for accesses by an instruction. The processor stores a record representing the prefetch instruction and updates the record with an indicator. When issuing a new prefetch instruction, the processor compares it to the record: based on the new prefetch instruction matching the prefetch instruction, it assigns the indicator to the new prefetch instruction as a priority value; based on the new prefetch instruction not matching, it assigns a default value as the priority value; and it determines whether to execute the new prefetch instruction based on that priority value. | 2017-12-28 |
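The record-and-indicator flow above can be sketched as a small tracker: a matching new prefetch inherits the record's indicator as its priority, a non-matching one gets a default, and the priority gates execution. The field names, the default value, and the threshold rule are assumptions, not the patented scheme:

```python
DEFAULT_PRIORITY = 1  # assumed default priority for unmatched prefetches

class PrefetchTracker:
    def __init__(self):
        self.records = {}  # prefetch address -> effectiveness indicator

    def record_use(self, addr, was_accessed):
        # Update the record: raise the indicator when the prefetched
        # elements were actually accessed, lower it when they were not.
        cur = self.records.get(addr, DEFAULT_PRIORITY)
        self.records[addr] = cur + 1 if was_accessed else max(cur - 1, 0)

    def priority_of(self, addr):
        # Matching record -> its indicator; no match -> the default.
        return self.records.get(addr, DEFAULT_PRIORITY)

    def should_issue(self, addr, threshold=1):
        # Gate the new prefetch instruction on its priority value.
        return self.priority_of(addr) >= threshold
```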
20170371667 | SYSTEM AND METHOD OF MERGING PARTIAL WRITE RESULT DURING RETIRE PHASE - A processor including a physical register file, a rename table, mapping logic, size tracking logic, and merge logic. The rename table maps an architectural register with a larger index and a smaller index. The mapping logic detects a partial write instruction that specifies an architectural register that is already identified by an entry of the rename table mapped to a second physical register allocated for a larger write operation, and includes an index for the allocated register for the partial write instruction into the smaller index location of the entry. The size tracking logic provides a merge indication for the partial write instruction if the write size of the previous write instruction is larger. The merge logic merges the result of the partial write instruction with the second physical register during retirement of the partial write instruction. | 2017-12-28 |
20170371668 | VARIABLE BRANCH TARGET BUFFER (BTB) LINE SIZE FOR COMPRESSION - Embodiments include methods, systems, and computer program products for variable branch target buffer line size for compression. In some embodiments, a branch target buffer (BTB) congruence class for a line of a first parent array of a BTB may be determined. A threshold indicative of a maximum number of branches to be stored in the line may be set. A branch may be received to store in the line of the first parent array. A determination may be made that storing the branch in the line would exceed the threshold, and the line can be responsively split into an even half line and an odd half line. | 2017-12-28 |
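The split step can be sketched directly: when adding a branch would exceed the per-line threshold, the entries are divided into an even half line and an odd half line. Keying the halves on the low address bit is an assumption for illustration; the abstract does not specify the partitioning rule:

```python
def split_btb_line(branches, new_branch, threshold):
    """branches: branch addresses already stored in the BTB line."""
    if len(branches) + 1 <= threshold:
        # The new branch still fits: no split needed.
        return {"line": branches + [new_branch]}
    entries = branches + [new_branch]
    # Threshold exceeded: split into even and odd half lines
    # (assumed rule: low bit of the branch address).
    return {
        "even": [b for b in entries if b % 2 == 0],
        "odd":  [b for b in entries if b % 2 == 1],
    }
```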
20170371669 | BRANCH TARGET PREDICTOR - A method for predicting a fetch address of a next instruction to be fetched includes selecting, at a processor, a first way identifier or a second way identifier as a way pointer based on an active fetch address and historical prediction data. A first predictor table includes a first entry having the first way identifier and a second predictor table includes a second entry having the second way identifier. The method also includes selecting a first or second fetch address as a predicted fetch address based on the way pointer. A target table includes a first way storing the first fetch address and a second way storing the second fetch address. The first way and the second way are associated with the active fetch address. The first fetch address is associated with the first way identifier and the second fetch address is associated with the second way identifier. | 2017-12-28 |
20170371670 | STREAM BASED BRANCH PREDICTION INDEX ACCELERATOR FOR MULTIPLE STREAM EXITS - A computer-implemented method for predicting a taken branch that ends an instruction stream in a pipelined high frequency microprocessor includes receiving, by a processor, a first instruction within a first instruction stream, the first instruction comprising a first instruction address; searching, by the processor, an index accelerator predictor one time for the stream; determining, by the processor, a prediction for a taken branch ending the branch stream; influencing, by the processor, a metadata prediction engine based on the prediction; observing a plurality of taken branches from the exit accelerator predictor; maintaining frequency information based on the observed taken branches; determining, based on the frequency information, an updated prediction of the observed plurality of taken branches; and updating, by the processor, the index accelerator predictor with the updated prediction. | 2017-12-28 |
20170371671 | STREAM BASED BRANCH PREDICTION INDEX ACCELERATOR WITH POWER PREDICTION - A computer-implemented method for predicting a taken branch that ends an instruction stream in a pipelined high frequency microprocessor includes receiving, by a processor, a first instruction within a first instruction stream, the first instruction including a first instruction address. The computer-implemented method further includes searching, by the processor, a stream-based index accelerator predictor one time for the stream; determining, by the processor, a prediction for a branch ending the branch stream; influencing, by the processor, a metadata prediction engine based on the prediction; and updating, by the processor, a stream-based index accelerator predictor with information indicative of the prediction. | 2017-12-28 |
20170371672 | STREAM BASED BRANCH PREDICTION INDEX ACCELERATOR FOR MULTIPLE STREAM EXITS - A computer-implemented method for predicting a taken branch that ends an instruction stream in a pipelined high frequency microprocessor includes receiving, by a processor, a first instruction within a first instruction stream, the first instruction comprising a first instruction address; searching, by the processor, an index accelerator predictor one time for the stream; determining, by the processor, a prediction for a taken branch ending the branch stream; influencing, by the processor, a metadata prediction engine based on the prediction; observing a plurality of taken branches from the exit accelerator predictor; maintaining frequency information based on the observed taken branches; determining, based on the frequency information, an updated prediction of the observed plurality of taken branches; and updating, by the processor, the index accelerator predictor with the updated prediction. | 2017-12-28 |
20170371673 | PROCESSOR WITH SLAVE FREE LIST THAT HANDLES OVERFLOW OF RECYCLED PHYSICAL REGISTERS AND METHOD OF RECYCLING PHYSICAL REGISTERS IN A PROCESSOR USING A SLAVE FREE LIST - A processor including physical registers, a reorder buffer, a master free list, a slave free list, a master recycle circuit, and a slave recycle circuit. The reorder buffer includes instruction entries in which each entry stores physical register indexes for recycling physical registers. The reorder buffer retires up to N instructions in each processor cycle. Each master and slave free list includes N input ports and stores physical register indexes, in which the master free list stores indexes of physical registers to be allocated to instructions being issued. When an instruction is retired, the master recycle circuit routes a first physical register index stored in an instruction entry of the instruction to an input port of the master free list, and the slave recycle circuit routes a second physical register index stored in the instruction entry of the instruction to an input port of the slave free list. | 2017-12-28 |
20170371674 | ARITHMETIC PROCESSING UNIT AND CONTROL METHOD FOR ARITHMETIC PROCESSING UNIT - An apparatus includes: a cache to retain an instruction; an instruction-control circuit to read out the instruction from the cache; and an instruction-execution circuit to execute the instruction read out from the cache, wherein the cache includes: a pipeline processing circuit including a plurality of selection stages in each of which, among a plurality of requests for causing the cache to operate, a request having a priority level higher than priority levels of other requests is outputted to a next stage and a plurality of processing stages in each of which processing based on a request outputted from a last stage among the plurality of selection stages is sequentially executed; and a cache-control circuit to input a request received from the instruction-control circuit to the selection stage in which processing order of the processing stage is reception order of the request. | 2017-12-28 |
20170371675 | Iteration Synchronization Construct for Parallel Pipelines - Embodiments include computing devices, apparatus, and methods implemented by the apparatus for implementing an iteration synchronization construct (ISC) for a parallel pipeline. The apparatus may initialize a first instance of the ISC for a first stage iteration of a first parallel stage of the parallel pipeline and a second instance of the ISC for a second stage iteration of the first parallel stage of the parallel pipeline. The apparatus may determine whether an execution control value is specified for the first stage iteration, and add a first execution control edge to the parallel pipeline after determining that an execution control value is specified for the first stage iteration. The apparatus may determine whether execution of the first stage iteration is complete and send a ready signal from the first instance of the ISC to the second instance of the ISC after determining that execution of the first stage iteration completed. | 2017-12-28 |
20170371676 | MULTI-MULTIDIMENSIONAL COMPUTER ARCHITECTURE FOR BIG DATA APPLICATIONS - A data processing apparatus is provided comprising a front-end interface electronically coupled to a main processor. The front-end interface is configured to receive data stored in a repository, in particular an external storage and/or a network, determine whether the data is a single-access data or a multiple-access data by analyzing an access parameter designating the data, route the multiple-access data for processing by the main processor, and route the single-access data for pre-processing by the front-end interface and routing results of the pre-processing to the main processor. | 2017-12-28 |
20170371677 | MANAGING INVOCATION OF TASKS - A graph-based program specification includes components, at least one having at least one input port for receiving a collection of data elements, or at least one collection type output port for providing a collection of data elements. Executing a program specified by the graph-based program specification at a computing node, includes: receiving data elements of a first collection into a first storage in a first order via a link connected to a collection type output port of a first component and an input port of a second component, and invoking a plurality of instances of a task corresponding to the second component to process data elements of the first collection, including retrieving the data elements from the first storage in a second order, without blocking invocation of any of the instances until after any particular instance completes processing one or more data elements. | 2017-12-28 |
20170371678 | METHOD AND APPARATUS FOR RUNNING GAME CLIENT - The present disclosure belongs to the field of computer technologies, and discloses a method and apparatus for running a game client. The method includes: receiving a startup instruction of a target game client, and sending a startup request corresponding to the target game client to a server; receiving startup data, sent by the server, corresponding to the target game client, and starting, based on the startup data, the target game client; sending, when a preset data obtaining condition of a target game unit in the corresponding target game client is satisfied, a data request carrying a unit identifier of the target game unit to the server; and receiving operating data of the target game unit sent by the server, and running, based on the operating data, the target game unit. By means of the present disclosure, storage resources of a mobile terminal can be saved. | 2017-12-28 |
20170371679 | IDENTIFICATION OF BOOTABLE DEVICES - A method for managing an initiation of a computing system. In an embodiment, the method includes a computer processor detecting that a first computing system receives a request to initiate a second computing system. The method further includes accessing a table that includes information associated with a plurality of storage entities that include bootable OS images, where the plurality of storage entities are included in at least one storage system. The method further includes determining a first storage entity that includes a corresponding instance of a first bootable OS image of the requested second computing system. The method further includes initiating the requested second computing system based, at least in part, on the instance of the bootable OS image of the first storage entity. | 2017-12-28 |
20170371680 | IDENTIFICATION OF BOOTABLE DEVICES - A method for managing an initiation of a computing system. In an embodiment, the method includes a computer processor detecting that a first computing system receives a request to initiate a second computing system. The method further includes accessing a table that includes information associated with a plurality of storage entities that include bootable OS images, where the plurality of storage entities are included in at least one storage system. The method further includes determining a first storage entity that includes a corresponding instance of a first bootable OS image of the requested second computing system. The method further includes initiating the requested second computing system based, at least in part, on the instance of the bootable OS image of the first storage entity. | 2017-12-28 |
20170371681 | SYSTEMS AND METHODS FOR USING DISTRIBUTED UNIVERSAL SERIAL BUS (USB) HOST DRIVERS - Systems and methods for using distributed Universal Serial Bus (USB) host drivers are disclosed. In one aspect, USB packet processing that was historically done on an application processor is moved to a distributed USB driver running in parallel on a low-power processor such as a digital signal processor (DSP). While a DSP is particularly contemplated, other processors may also be used. Further, a communication path is provided from the low-power processor to USB hardware that bypasses the application processor. Bypassing the application processor in this fashion allows the application processor to remain in a sleep mode for longer periods of time instead of processing digital data received from the low-power processor or the USB hardware. Further, by bypassing the application processor, latency is reduced, which improves the listener experience. | 2017-12-28 |
20170371682 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - A printer driver and an advanced UI application are associated with each other during installation, and the advanced UI application is activated in a different process using a COM when the printer driver is called. | 2017-12-28 |
20170371683 | PROVISIONING THE HYPER-CONVERGED INFRASTRUCTURE BARE METAL SYSTEMS FROM THE TOP OF THE RACK SWITCH - Methods and devices for provisioning a hyper-converged infrastructure of bare metal systems are disclosed herein. Two fabric elements are configured in a master-slave arrangement to ensure high availability. ONIE capable fabric elements may be pre-installed with an operating system as firmware to run open network operating systems, such as Linux. The Linux operating system includes a KVM hypervisor to run virtual machines. An operating system of the virtual machines can access an external network by creating a bridge between switch management ports and a virtual network interface. New node elements may be added by connecting the network ports of the new node element to the fabric elements and booting the new node element in a network/PXE boot mode. The new node element obtains an IP address from a DHCP server and boots an image downloaded from a PXE server. | 2017-12-28 |
20170371684 | PIN CONTROL METHOD AND DEVICE - A pin control method and device are provided. The method may be applied to a first chip and the first chip includes: a sleep pin connected with a wakeup pin on a second chip, a Request To Send (RTS) pin connected with a Clear To Send (CTS) pin on the second chip, a Receive Data (RXD) pin connected with a Transmit Data (TXD) pin on the second chip. The method includes: receiving, by the sleep pin, a data sending signal sent by the second chip; setting the RTS pin into an effective state according to the data sending signal; receiving, by the RXD pin, data sent by the second chip, the RXD pin being in the effective state when the RTS pin is in the effective state; receiving, by the sleep pin, a transmission completion signal sent by the second chip; setting the RTS pin into an ineffective state according to the transmission completion signal; and determining, according to a current running condition, whether to enter a sleep state. | 2017-12-28 |
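The handshake in this abstract is a small state machine: the sleep pin's data-sending signal drives RTS effective, RXD data is accepted only while RTS is effective, and the transmission-completion signal drives RTS ineffective before the chip decides whether to sleep. A simplified model, with the sleep-decision rule assumed for illustration:

```python
class FirstChip:
    """Simplified model of the first chip's pin-control handshake."""

    def __init__(self):
        self.rts_effective = False
        self.received = []

    def on_data_sending_signal(self):
        # Sleep pin saw the second chip's data-sending signal:
        # set RTS effective to clear the second chip to send.
        self.rts_effective = True

    def on_rxd(self, byte):
        # RXD data is only valid while RTS is in the effective state.
        if self.rts_effective:
            self.received.append(byte)

    def on_transmission_complete(self, busy=False):
        # Transmission-completion signal: set RTS ineffective, then
        # decide from the current running condition whether to sleep
        # (assumed rule: sleep only when nothing else is running).
        self.rts_effective = False
        return "awake" if busy else "sleep"
```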
20170371685 | USER INTERFACE EXECUTION APPARATUS AND USER INTERFACE DESIGNING APPARATUS - The present invention has an object of providing a user interface execution apparatus and a user interface designing apparatus which can estimate the maximum size of a storage area for storing data to be prefetched when a user interface is designed and can present updated data to the user even when the prefetched data is updated after the prefetch. A user interface execution apparatus in the present invention includes a processor to execute a program; and a memory to store the program which, when executed by the processor, performs processes of: transitioning a state of the user interface execution apparatus; issuing a prefetch request for data; storing the data; generating the code from an interface definition and a state transition definition; and selecting, before transitioning the state, data to be prefetched based on a difference between a data obtaining interface to be used in a state before the transitioning and a data obtaining interface to be used in a state after the transitioning. | 2017-12-28 |
20170371686 | NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM STORING COMPUTER-READABLE INSTRUCTIONS FOR CAUSING INFORMATION PROCESSING DEVICE TO EXECUTE IMAGE PROCESS, AND INFORMATION PROCESSING DEVICE - An information processing device may read, from a shared storing area, first identification information indicating K pieces of first applications that are already installed. The information processing device may display first screen on the display. The information processing device may cause an operating system to display the K pieces of first images in the first screen. Each of K pieces of link information may be associated with a corresponding one of the K pieces of first images. When any one of the K pieces of first images receives an input operation, the operating system may activate the corresponding first application indicated by the link information associated with the first image that receives the input operation. | 2017-12-28 |
20170371687 | AUTOMATED GLOBALIZATION ENABLEMENT ON DEVELOPMENT OPERATIONS - Techniques are disclosed for providing dynamic globalization enablement for developing an application during software development. A globalization development operation information system (GDOIS) retrieves source code for the application, which is assigned to support specified globalization features. The GDOIS evaluates the source code for each of the plurality of specified globalization features. Upon determining that the source code does not include at least a first specified globalization feature, the GDOIS identifies an application programming interface (API) associated with the feature. The GDOIS inserts source code associated with the API into the source code for the application. | 2017-12-28 |
20170371688 | METHOD FOR PROVIDING ADDITIONAL INFORMATION ABOUT APPLICATION AND ELECTRONIC DEVICE FOR SUPPORTING THE SAME - An electronic device and method are disclosed. The electronic device includes a communication unit, a display, a memory and a processor. The processor implements the method, including analyzing activity of an application to identify at least one function of the application added, deleted or altered by an update to the application, and controlling the display to display at least one item selectable to provide additional information corresponding to the identified at least one new function. | 2017-12-28 |
20170371689 | LAYERED VIRTUAL MACHINE INTEGRITY MONITORING - Various embodiments are generally directed to the provision and use of various hardware and software components of a computing device to monitor the state of layered virtual machine (VM) monitoring software components. An apparatus includes a first processor element; and logic to receive an indication that a first timer has reached an end of a first period of time, monitor execution of a VMM (virtual machine monitor) watcher by a second processor element, determine whether the second processor element completes execution of the VMM watcher to verify integrity of a VMM before a second timer reaches an end of a second period of time, and transmit an indication of the determination to a computing device. Other embodiments are described and claimed. | 2017-12-28 |
20170371690 | DATABASE SYSTEMS AND RELATED METHODS FOR VALIDATION WORKFLOWS - Computing systems, database systems, and related methods are provided for supporting dynamic validation workflows. One exemplary method involves a server of a database system receiving a graphical representation of a validation process from a client device coupled to a network, converting the graphical representation of the validation process into validation code, and storing the validation code at the database system in association with a database object type. Thereafter, the validation process is performed with respect to an instance of the database object type using the validation code in response to an action with respect to the instance of the database object type in a database of the database system. The action triggering the validation process can be based on user-configurable triggering criteria, and the validation process may generate user-configurable notifications based on one or more field values of the database object instance. | 2017-12-28 |
20170371691 | Hypervisor Exchange With Virtual Machines In Memory - A hypervisor-exchange process includes: suspending, by an “old” hypervisor, resident virtual machines; exchanging the old hypervisor for a new hypervisor, and resuming, by the new hypervisor, the resident virtual machines. The suspending can include “in-memory” suspension of the virtual machines until the virtual machines are resumed by the new hypervisor. Thus, there is no need to load the virtual machines from storage prior to the resuming. As a result, any interruption of the virtual machines is minimized. In some embodiments, the resident virtual machines are migrated onto one or more host virtual machines to reduce the number of virtual machines being suspended. | 2017-12-28 |
20170371692 | OPTIMIZED VIRTUAL NETWORK FUNCTION SERVICE CHAINING WITH HARDWARE ACCELERATION - Systems and methods for Virtual Network Function (VNF) service chain optimization include, responsive to a request, determining placement for one or more VNFs in a VNF service chain based on a lowest cost determination; configuring at least one programmable region of acceleration hardware for at least one VNF of the one or more VNFs; and activating the VNF service chain. The lowest cost determination can be based on a service chain cost model that assigns costs based on connectivity between switching elements and between hops between sites. The activating can include a Make-Before-Break (MBB) operation in a network to minimize service interruption of the VNF service chain. | 2017-12-28 |
20170371693 | MANAGING CONTAINERS AND CONTAINER HOSTS IN A VIRTUALIZED COMPUTER SYSTEM - One example relates to a computer system that includes a plurality of host computers each executing a hypervisor. The computer system further includes a virtualization manager having an application programming interface (API) configured to manage the hypervisor on each of the plurality of host computers, the virtualization manager configured to create a virtual container host within a resource pool that spans the plurality of host computers. The computer system further includes a plurality of container virtual machines (VMs) in the virtual container host configured to consume resources in the resource pool. The computer system further includes a daemon appliance executing in the virtual container host configured to invoke the API of the virtualization manager to manage the plurality of container VMs in response to commands from one or more clients. | 2017-12-28 |
20170371694 | VIRTUALIZATION OF A GRAPHICS PROCESSING UNIT FOR NETWORK APPLICATIONS - An accelerated processing unit includes a first processing unit configured to implement one or more virtual machines and a second processing unit configured to implement one or more acceleration modules. The one or more virtual machines are configured to provide information identifying a task or data to the one or more acceleration modules via first queues. The one or more acceleration modules are configured to provide information identifying results of an operation performed on the task or data to the one or more virtual machines via one or more second queues. | 2017-12-28 |
20170371695 | Techniques for Persistent Memory Virtualization - Examples may include techniques for persistent memory virtualization. Persistent memory maintained at one or more memory devices coupled with a host computing device may be allocated and assigned to a virtual machine (VM) hosted by the host computing device. The allocated persistent memory based on a file based virtual memory to be used by the VM. An extended page table (EPT) may be generated to map physical memory pages of the one or more memory devices to virtual logical blocks of the file based virtual memory. Elements of the VM then enumerate a presence of the assigned allocated persistent memory, create a virtual disk abstraction for the file based virtual memory and use the EPT to directly access the assigned allocated persistent memory. | 2017-12-28 |
20170371696 | CROSS-CLOUD PROVIDER VIRTUAL MACHINE MIGRATION - A method for migrating a virtual machine (VM) includes establishing a first connection to a first cloud computing system executing a first VM, and establishing a second connection to a second cloud computing system managed by a second cloud provider, which is different from the first cloud provider. The method further includes instantiating a second VM designated as a destination VM in the second cloud computing system, and installing a migration agent on each of the first VM and the second VM. The migration agents execute a migration process of the first VM to the second VM by (1) iteratively copying guest data from the first VM to the second VM until a switchover criterion of the migration operation is met, and (2) copying a remainder of guest data from the first VM to the second VM when the switchover criterion is met. | 2017-12-28 |
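The two-phase copy loop described in this abstract can be sketched as follows. This is a minimal Python sketch, not the patented implementation: `dirty_log` is a hypothetical stand-in for the hypervisor's dirty-page tracking, and the switchover criterion is simplified to a dirty-page count threshold.

```python
def migrate(pages, dirty_log, threshold=2):
    """pages: guest memory {page: data}. dirty_log: per-round sets of
    pages the guest writes while migration runs (a stand-in for real
    dirty-page tracking). threshold: simplified switchover criterion."""
    dest = {}
    dirty = set(pages)                      # round 0: every page is dirty
    for writes in dirty_log:
        for p in dirty:                     # pre-copy round, guest running
            dest[p] = pages[p]
        dirty = set(writes)
        if len(dirty) <= threshold:         # switchover criterion met
            break
    for p in dirty:                         # final pass, guest paused
        dest[p] = pages[p]
    return dest
```

The pre-copy rounds overlap with guest execution; only the final, small remainder is copied while the source VM is paused, which keeps switchover downtime short.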
20170371697 | TEST SYSTEM FOR TESTING A COMPUTER OF A COMPUTER SYSTEM IN A TEST NETWORK - A test system for testing a particular computer of a particular computer system in a test network includes: a simulation server configured to emulate a test object; and a control entity for controlling the simulation server, wherein the control entity is configured to instruct the simulation server to generate a virtual test object for emulating the test object, and to instruct a test entity to test the virtual test object generated by the simulation server. | 2017-12-28 |
20170371698 | VIRTUAL SWITCH FOR MULTI-COMPARTMENT MIXED CRITICAL NETWORK COMMUNICATIONS - The invention concerns a multi-core processing system comprising: a first input/output interface ( | 2017-12-28 |
20170371699 | SYSTEM AND METHOD FOR NESTED HYPERVISORS AND LAYER 2 INTERCONNECTION - Provided is a system and method for a multi-tenant datacenter with nested hypervisors. This is provided by at least two physical computing systems, each having at least one processor and memory store adapted to provide first-level Hypervisors, each providing a First Virtual Computing Environment with a plurality of inactive Virtual Hypervisors nested therein. The multi-tenant datacenter is structured and arranged to activate a Virtual Hypervisor on one of the at least two Hypervisors and automatically migrate the at least one Customer VM from a Customer Hypervisor to the Active Virtual Hypervisor; and evacuate the remaining inactive Virtual Hypervisors from the Hypervisor supporting the Active Virtual Hypervisor to another of the at least two Hypervisors supporting inactive Virtual Hypervisors. Further, each Customer Virtual Machine in the Active Virtual Hypervisor is coupled to the second physical computing system by OSI Layer 2, prior to an OSI Layer 3 connection, for the transfer of data frames, each frame having a plurality of OSI Layer 2 tags permitting the segregation of each Virtual Machine independent of Layer 3 communication. An associated method of use is also provided. | 2017-12-28 |
20170371700 | Method and Apparatus for Managing Virtual Execution Environments Using Contextual Information Fragments - A computing apparatus includes a processor and a memory coupled with the processor and has a program to be executed in the processor. The program includes instructions for maintaining a plurality of virtual execution environments, determining context meta-data for the plurality of virtual execution environments, collecting current contextual information for the computing apparatus, and activating one or more of the plurality of virtual execution environments based on the collected current contextual information and the context meta-data. | 2017-12-28 |
20170371701 | APPARATUSES, METHODS, AND SYSTEMS FOR GRANULAR AND ADAPTIVE HARDWARE TRANSACTIONAL SYNCHRONIZATION - Methods and apparatuses relating to hardware transactions are described. In one embodiment, a processor includes one or more cores to concurrently execute a plurality of transactions, and a hardware transactional circuit to detect an occurrence of a software selected precursor in any of the plurality of transactions and abort at least one of the plurality of transactions on the occurrence unless an interface to software indicates the occurrence is to not cause an abort, wherein the occurrence is not a memory access of shared data by the plurality of transactions. | 2017-12-28 |
20170371702 | SECURED COMPUTING SYSTEM - Examples related to secure computing systems are disclosed. In one example, a method includes, at a local agent computing device, sending to a remote work scheduling computing device a work context of the local agent computing device, the work context describing a set of work that the local agent is configured to execute, and polling a remote work depository for work compatible with the work context. The method further includes receiving a response from the remote work depository identifying a job within the work context, the job being requested by a computing device other than the remote work scheduling computing device, and executing the job. | 2017-12-28 |
20170371703 | ASYNCHRONOUS TASK MANAGEMENT IN AN ON-DEMAND NETWORK CODE EXECUTION ENVIRONMENT - Systems and methods are described for managing asynchronous code executions in an on-demand code execution system or other distributed code execution environment, in which multiple execution environments, such as virtual machine instances, can be used to enable rapid execution of user-submitted code. When asynchronous executions occur, a first execution may call a second execution, but not immediately need the second execution to complete. To efficiently allocate computing resources, this disclosure enables the second execution to be scheduled according to a state of the on-demand code execution system, while still ensuring the second execution completes prior to the time required by the first execution. Scheduling of executions can, for example, enable more efficient load balancing on the on-demand code execution system. | 2017-12-28 |
20170371704 | PROGRAM INFORMATION GENERATING SYSTEM, METHOD, AND PROGRAM PRODUCT - A program information generating system includes circuitry configured to acquire a program including a non-interruption instruction code and an interruption instruction code, and action information indicating an order of execution of the non-interruption instruction code and the interruption instruction code, determine an action interruption position representing a position where interruption has occurred in the action information based on the interruption instruction code and the action information, determine a program interruption position representing a position where interruption has occurred in the program based on the non-interruption instruction code and the action interruption position, and generate program interruption position information for specifying the program interruption position. | 2017-12-28 |
20170371705 | TERMINAL APPARATUS - A terminal apparatus includes a storage generation unit that generates a storage module that stores, in association with information concerning a process related to a job that is performed by executing multiple processes in a sequential order, and information concerning a process to be performed with a system connected from among the multiple processes, screen data of the system used in the process with the system connected and a display controller that performs control to display the screen data if the system is unconnectable when a request to execute the process is received. | 2017-12-28 |
20170371706 | ASYNCHRONOUS TASK MANAGEMENT IN AN ON-DEMAND NETWORK CODE EXECUTION ENVIRONMENT - Systems and methods are described for managing asynchronous code executions in an on-demand code execution system or other distributed code execution environment, in which multiple execution environments, such as virtual machine instances, can be used to enable rapid execution of user-submitted code. When asynchronous executions occur, one execution may become blocked while waiting for completion of another execution. Because the on-demand code execution system contains multiple execution environments, the system can efficiently handle a blocked execution by saving a state of the execution, and removing it from its execution environment. When a blocking dependency operation completes, the system can resume the blocked execution using the state information, in the same or different execution environment. | 2017-12-28 |
20170371707 | DATA ANALYSIS IN STORAGE SYSTEM - Embodiments of the present disclosure provide a method of analyzing data in a storage system, a storage system, and a computer program product. The method includes: in response to detecting a request for a data analytic job, obtaining target data for the data analytic job from a first storage device of the storage system. The method also includes storing the target data into a second storage device of the storage system that is assigned for data analysis, and performing the data analytic job using a data processing device and the second storage device in the storage system. | 2017-12-28 |
20170371708 | AUTOMATIC PLACEMENT OF VIRTUAL MACHINE INSTANCES - A virtual computer system service receives a request from a customer to instantiate a virtual machine instance onto a computing device. The virtual computer system service obtains a set of preferences from the request that can be used for selecting the computing device from a variety of data zones. The virtual computer system service identifies one or more data zones where virtual machine instances of the customer are operating. Based on the set of preferences and the one or more data zones where the virtual machine instances are operating, the virtual computer system service selects a data zone where the virtual machine instance can be instantiated. The virtual computer system service uses a computing device in the selected data zone to instantiate the virtual machine instance. | 2017-12-28 |
20170371709 | OPTIMIZING SIMULTANEOUS STARTUP OR MODIFICATION OF INTER-DEPENDENT MACHINES WITH SPECIFIED PRIORITIES - Identify individual machines of a multi-machine computing system. Construct a graph of dependencies among the machines. Obtain estimated total administration times and administration priorities for each of the machines. Identify availability of administration resources to assist in administration of one or more of the machines. Select a first set of machines for administration in response to the graph, administration priorities, estimated total administration times, and availability of the first set of administration resources, and administer the first set of machines in parallel using the first set of administration resources. Update the graph in response to administration of the first set of machines. Select a subsequent set of machines for administration in response to the updated graph, administration priorities, estimated total administration times, and availability of a subsequent set of administration resources. Administer the subsequent set of machines in parallel using the subsequent set of administration resources. | 2017-12-28 |
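The wave-by-wave administration flow described above — build a dependency graph, pick a prioritized, resource-limited set of ready machines, administer them in parallel, update the graph, repeat — can be sketched with a topological sort. This is a hypothetical Python sketch (Python 3.9+ for `graphlib`); the `slots` parameter stands in for the available administration resources, and estimated administration times are omitted for brevity.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def administration_waves(deps, priority, slots):
    """deps: machine -> set of machines it depends on.
    priority: machine -> administration priority (higher runs first).
    slots: administration resources available per wave.
    Returns the sets of machines administered in parallel, wave by wave."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    waves, ready = [], []
    while ts.is_active():
        ready += ts.get_ready()               # machines whose deps are done
        ready.sort(key=lambda m: -priority[m])
        wave, ready = ready[:slots], ready[slots:]  # limited by resources
        waves.append(wave)
        ts.done(*wave)                        # "update the graph"
    return waves
```

Leftover ready machines carry over to the next wave, so a low-priority machine is never lost when resources run short.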
20170371710 | DETECTING AND ENFORCING CONTROL-LOSS RESTRICTIONS WITHIN AN APPLICATION PROGRAMMING INTERFACE - Disclosed herein are a method, system, and/or computer program product for determining control of a processing resource. To determine control of the processing resource, the method, system, and/or computer program product can set a control-loss flag indicating whether a process has control of the processing resource and check the control-loss flag to determine whether the process lost control of the processing resource. | 2017-12-28 |
20170371711 | ELIMINATING EXECUTION OF JOBS-BASED OPERATIONAL COSTS OF RELATED REPORTS - Optimizing operational costs in a computing environment includes identifying high-cost jobs that are executed to generate one or more reports in the computing environment, identifying one or more reports the generation of which is dependent on the execution of the high-cost jobs, and culling at least a first job from among the high-cost jobs, in response to determining that a benefit achieved from the reports that depend on the first job does not justify costs associated with generating the reports. | 2017-12-28 |
20170371712 | HIERARCHICAL PROCESS GROUP MANAGEMENT - Management of hierarchical process groups is provided. Aspects include creating a group identifier having an associated set of resource limits for shared resources of a processing system. A process is associated with the group identifier. A hierarchical process group is created including the process as a parent process and at least one child process spawned from the parent process, where the at least one child process inherits the group identifier. A container is created to store resource usage of the hierarchical process group and the set of resource limits of the group identifier. The set of resources associated with the hierarchical process group is used to collectively monitor resource usage of a plurality of processes in the hierarchical process group. | 2017-12-28 |
20170371713 | INTELLIGENT RESOURCE MANAGEMENT SYSTEM - The specification relates to an intelligent resource management system. The system is capable of receiving a job script file requesting to run analyses for a data file on a multi-CPU system using a multi-threaded application. The system then builds an application knowledge structure and an intelligent resource mapping table based on the application knowledge structure, with the intelligent resource mapping table requesting a number of CPUs needed for the analysis. The data file can be partitioned into a number of data segments equal to the number of CPUs needed for the analysis, and a number of application instances equal to the number of CPUs needed for the analysis can be created. The multi-threaded applications are executed on a plurality of CPUs for each bio-informatics data segment and resultants are obtained for each execution. These resultants are combined in the same order as the data partitioning to obtain the analysis results. | 2017-12-28 |
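The partition-execute-recombine flow in this abstract can be sketched as follows. This is a hedged Python sketch: `analyze` is a placeholder for the real multi-threaded analysis, and the thread pool stands in for the per-CPU application instances.

```python
from concurrent.futures import ThreadPoolExecutor

def partition(data, n):
    """Split data into n contiguous segments, one per CPU."""
    size, rem = divmod(len(data), n)
    segments, start = [], 0
    for i in range(n):
        end = start + size + (1 if i < rem else 0)  # spread the remainder
        segments.append(data[start:end])
        start = end
    return segments

def analyze(segment):
    return sum(segment)               # stand-in for the real analysis

def run(data, n_cpus):
    """One application instance per CPU; results recombined in
    partition order, as the abstract describes."""
    with ThreadPoolExecutor(max_workers=n_cpus) as pool:
        return list(pool.map(analyze, partition(data, n_cpus)))
```

`pool.map` preserves input order, which is what guarantees the "combined in the same order as the data partitioning" property.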
20170371714 | APPLICATION INTERFACE ON MULTIPLE PROCESSORS - A method and an apparatus that execute a parallel computing program in a programming language for a parallel computing architecture are described. The parallel computing program is stored in memory in a system with parallel processors to allocate threads between a host processor and a GPU. The programming language includes an API to allow an application to make calls using the API to allocate execution of the threads between the host processor and the GPU. The programming language includes host function data tokens for host functions performed in the host processor and kernel function data tokens for compute kernel functions performed in one or more compute processors, e.g., GPUs or CPUs, separate from the host processor. | 2017-12-28 |
20170371715 | PREDICTIVE OPTIMIZATION OF NEXT TASK THROUGH ASSET REUSE - A first category is determined of a first task being performed at a given time. A first asset that is configured for use with the first category is identified. A next task object is constructed. By analyzing a set of tasks that were performed during a period prior to the given time, a candidate next task is identified. The candidate next task has been performed sometime after a previous performance of the first task during the period. From the first asset, a link to a second asset is selected. The second asset is configured for use with a second category of the candidate next task. The next task object is populated with the link. The candidate next task is designated as a second task that will occur sometime after the first task. | 2017-12-28 |
20170371716 | IDENTIFIER (ID) ALLOCATION IN A VIRTUALIZED COMPUTING ENVIRONMENT - Example methods are provided for a first node to perform identifier (ID) allocation in a virtualized computing environment that includes a cluster formed by the first node and at least one second node. The method may comprise retrieving, from a pool of IDs associated with the cluster, a batch of IDs to a cache associated with the first node. The pool of IDs may be shared within the cluster and the batch of IDs retrieved for subsequent ID allocation by the first node. The method may also comprise, in response to receiving a request for ID allocation from an ID consumer, allocating one or more IDs from the batch of IDs in the cache to respective one or more objects for unique identification of the one or more objects across the cluster; and sending, to the ID consumer, a response that includes the allocated one or more IDs. | 2017-12-28 |
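The batch-retrieval scheme this abstract describes — fetch a batch of IDs from the cluster-shared pool, then serve individual allocations from a per-node cache — might look like the minimal sketch below. Class and method names are illustrative, not from the patent, and the shared pool is modeled as a simple locked counter.

```python
import itertools
import threading

class ClusterIdPool:
    """Shared pool of IDs for the whole cluster (hypothetical model)."""
    def __init__(self):
        self._next = itertools.count(1)
        self._lock = threading.Lock()

    def retrieve_batch(self, size):
        # One synchronized round-trip per batch, not per ID.
        with self._lock:
            return [next(self._next) for _ in range(size)]

class NodeIdCache:
    """Per-node cache; refills from the shared pool only when empty."""
    def __init__(self, pool, batch_size=100):
        self.pool, self.batch_size, self.cache = pool, batch_size, []

    def allocate(self, count=1):
        allocated = []
        for _ in range(count):
            if not self.cache:
                self.cache = self.pool.retrieve_batch(self.batch_size)
            allocated.append(self.cache.pop(0))
        return allocated
```

Because batches never overlap, IDs handed out by different nodes are unique across the cluster without a pool round-trip per allocation.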
20170371717 | RESOURCE MANAGEMENT IN CLOUD SYSTEMS - A method for waking up one or more sleeping small cell base stations in a wireless communication system for serving a user equipment is described. The wireless communication system includes a plurality of small cell base stations and one or more macro base stations. A wake up signal configuration is received at a user equipment, and a wake up signal configured in accordance with the received wake up signal configuration is transmitted by the user equipment. | 2017-12-28 |
20170371718 | CONTENT-BASED DISTRIBUTION AND EXECUTION OF ANALYTICS APPLICATIONS ON DISTRIBUTED DATASETS - Methods are provided. A method includes announcing, to a network, meta information describing each of a plurality of distributed data sources. The method further includes propagating the meta information amongst routing elements in the network. The method also includes inserting into the network a description of distributed datasets that match a set of requirements of an analytics task. The method additionally includes delivering, by the routing elements, a copy of the analytics task to locations of respective ones of the plurality of distributed data sources that include the distributed datasets that match the set of requirements of the analytics task. | 2017-12-28 |
20170371719 | TEMPERATURE-AWARE TASK SCHEDULING AND PROACTIVE POWER MANAGEMENT - Systems, apparatuses, and methods for performing temperature-aware task scheduling and proactive power management. A SoC includes a plurality of processing units and a task queue storing pending tasks. The SoC calculates a thermal metric for each pending task to predict an amount of heat the pending task will generate. The SoC also determines a thermal gradient for each processing unit to predict a rate at which the processing unit's temperature will change when executing a task. The SoC also monitors a thermal margin of how far each processing unit is from reaching its thermal limit. The SoC minimizes non-uniform heat generation on the SoC by scheduling pending tasks from the task queue to the processing units based on the thermal metrics for the pending tasks, the thermal gradients of each processing unit, and the thermal margin available on each processing unit. | 2017-12-28 |
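One simple reading of the scheduling policy above — place the hottest pending tasks on the units with the most thermal headroom — can be sketched as follows. The greedy heuristic and the linear margin/gradient model are assumptions for illustration, not the patented method.

```python
def schedule(tasks, units):
    """tasks: list of (name, thermal_metric), predicted heat per task.
    units: unit name -> {"margin": degrees C to thermal limit,
    "gradient": degrees C of heating per unit of task heat}.
    Greedily assigns hot tasks to units with the most headroom."""
    placement = {}
    for name, heat in sorted(tasks, key=lambda t: -t[1]):  # hottest first
        # Pick the unit whose remaining margin after this task is largest.
        best = max(units, key=lambda u: units[u]["margin"]
                   - heat * units[u]["gradient"])
        placement[name] = best
        units[best]["margin"] -= heat * units[best]["gradient"]
    return placement
```

Spreading the heat this way avoids the non-uniform hot spots the abstract targets, since every placement reduces the chosen unit's margin and makes it less attractive for the next task.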
20170371720 | MULTI-PROCESSOR APPARATUS AND METHOD OF DETECTION AND ACCELERATION OF LAGGING TASKS - A method and processing apparatus for accelerating program processing is provided that includes a plurality of processors configured to process a plurality of tasks of a program and a controller. The controller is configured to determine, from the plurality of tasks being processed by the plurality of processors, a task being processed on a first processor to be a lagging task causing a delay in execution of one or more other tasks of the plurality of tasks. The controller is further configured to provide the determined lagging task to a second processor to be executed by the second processor to accelerate execution of the lagging task. | 2017-12-28 |
20170371721 | General Purpose Distributed Data Parallel Computing Using a High Level Language - General-purpose distributed data-parallel computing using a high-level language is disclosed. Data parallel portions of a sequential program that is written by a developer in a high-level language are automatically translated into a distributed execution plan. The distributed execution plan is then executed on large compute clusters. Thus, the developer is allowed to write the program using familiar programming constructs in the high level language. Moreover, developers without experience with distributed compute systems are able to take advantage of such systems. | 2017-12-28 |
20170371722 | INTELLIGENT MEDIATION OF MESSAGES IN A HEALTHCARE PRODUCT INTEGRATION PLATFORM - Example systems and methods to mediate messages in a product integration platform are disclosed and described. An example method includes receiving a request message from an integration platform, the request message being received in a canonical data format. The example method includes denormalizing the request message into an interface request message format, the interface request message format corresponding to a message format implemented by an interface. The example method includes translating the request message from the interface request message format into a target healthcare system format, the target healthcare system format corresponding to a message format implemented by a target healthcare system, wherein the interface is associated with the target healthcare system. The example method includes sending the request message to the target healthcare system. | 2017-12-28 |
20170371723 | NOTIFICATION SERVICE IN A DECENTRALIZED CONTROL PLANE OF A COMPUTING SYSTEM - A method of providing notifications in a control plane of a computer system includes executing a service host process of the control plane on a software platform of the computer system, the service host process managing services of the control plane and a persistent document store that stores service states for the services. The method may include creating a query task service of the control plane, a service state of the query task service including a query filter; evaluating each of the service states against the query filter as each of the service states is added to the persistent document store; updating the service state of the query task service for each of the service states that satisfies the query filter; and sending a notification to a plurality of subscribers of the query task service in response to each update to the service state of the query task service. | 2017-12-28 |
20170371724 | EVENT-DRIVEN COMPUTING - A service manages a plurality of virtual machine instances for low latency execution of user codes. The service can provide the capability to execute user code in response to events triggered on various event sources and initiate execution of other control functions to improve the code execution environment in response to detecting errors or unexpected execution results. The service may maintain or communicate with a separate storage area for storing code execution requests that were not successfully processed by the service. Requests stored in such a storage area may subsequently be re-processed by the service. | 2017-12-28 |
20170371725 | HARDWARE MULTI-THREADING CO-SCHEDULING FOR PARALLEL PROCESSING SYSTEMS - A method, information processing system, and computer program product are provided for managing operating system interference on applications in a parallel processing system. A mapping of hardware multi-threading threads to at least one processing core is determined, and first and second sets of logical processors of the at least one processing core are determined. The first set includes at least one of the logical processors of the at least one processing core, and the second set includes at least one of a remainder of the logical processors of the at least one processing core. A processor schedules application tasks only on the logical processors of the first set of logical processors of the at least one processing core. Operating system interference events are scheduled only on the logical processors of the second set of logical processors of the at least one processing core. | 2017-12-28 |
20170371726 | RAPID PREDICTIVE ANALYSIS OF VERY LARGE DATA SETS USING AN ACTOR-DRIVEN DISTRIBUTED COMPUTATIONAL GRAPH - A system for predictive analysis of very large data sets using an actor-driven distributed computational graph, wherein a pipeline orchestrator creates and manages individual data pipelines while providing data caching to enable interactions between specific activity actors within pipelines. Each pipeline then comprises a pipeline manager that creates and manages individual activity actors and directs operations within the pipeline while reporting back to the pipeline orchestrator. | 2017-12-28 |
20170371727 | EXECUTION OF INTERACTION FLOWS - Examples relate to execution of interaction flows. The examples disclosed herein enable obtaining, via a user interface of a local client computing device, an interaction flow that defines an order of execution of a plurality of interaction points and values exchanged among the plurality of interaction points, the plurality of interaction points comprising a first interaction point that indicates an event executed by an application; triggering the execution of the interaction flow; determining whether any of remote client computing devices that are in communication with the local client computing device includes the application; and causing the first interaction point to be executed by the application in at least one of the remote client computing devices that are determined to include the application. | 2017-12-28 |
20170371728 | PROGRAMMATIC IMPLEMENTATIONS GENERATED FROM AN API CALL LOG - Systems and methods for generating a programmatic implementation based on a set of recorded API calls. One example includes determining an interval of time during which actions made on an interface associated with a session user account are made, obtaining a set of records from an API call log that indicates a set of API calls made during the interval of time, and generating a programmatic implementation that is usable to submit the set of API calls. | 2017-12-28 |
20170371729 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM AND PROGRAM - An information processing device includes: an acquisition unit configured to acquire a determination result of a state of a user, who has given a transmission job execution instruction, determined based on biological information of the user; and a job control unit configured to control an execution of the transmission job according to the user state determination result, wherein when it is determined that the user is in an off-normal state, the job control unit executes a confirmation request process to request the user to make a confirmation related to the transmission job. | 2017-12-28 |
20170371730 | ACTION RECOMMENDATION TO REDUCE SERVER MANAGEMENT ERRORS - An actuator to execute on a server may be automatically selected based on risk of failure and damage to the server. Requirement specification and environment parameters may be received. A subset of actuators may be selected based on a risk threshold from an actuator catalog database storing actuator information and actuator risk metadata associated with a plurality of actuators. The actuator risk metadata may be augmented with risk information. A ranked list of the subset of actuators may be generated based on the actuator risk metadata associated with each actuator in the subset. An actuator in the ranked list may be executed on the server. | 2017-12-28 |
20170371731 | CULPRIT MODULE DETECTION AND SIGNATURE BACK TRACE GENERATION - In a crash analysis system, a method for analyzing a core dump corresponding to a crash of a computer system is disclosed. A core dump is received, wherein the core dump corresponds to a crash of a computer system. A culprit module responsible for the crash of the computer system is determined. A signature back trace, which pertains to a symptom of the crash of the computer system, is generated. | 2017-12-28 |
20170371732 | METHOD FOR DEBUGGING STATIC MEMORY CORRUPTION - An indication is received. The indication is of an address in a first page in virtual memory used by an application with a static memory corruption. A loadable kernel module will monitor the address. Access to the first page in virtual memory is changed from read/write access to read only access. A second page in virtual memory is created with read/write access. Whether a page fault occurs on the first page in virtual memory during the execution of the application with the static memory corruption is determined. | 2017-12-28 |
20170371733 | HYPERVISOR TECHNIQUES FOR PERFORMING NON-FAULTING READS IN VIRTUAL MACHINES - Guest memory data structures are read by one or more read operations which are set up to handle page faults and general protection faults generated during the read in various ways. If such a fault occurs while performing the one or more read operations, the fault is handled and the one or more read operation is terminated. The fault is handled by either dropping the fault and reporting an error instead of the fault, by dropping the fault and invoking an error handler that is set up prior to performing the read operations, or by forwarding the fault to a fault handler that is setup prior to performing the read operations. If no fault occurs, the read operations complete successfully. Thus, under normal circumstances, no fault is incurred in a read operation on guest memory data structures. | 2017-12-28 |
20170371734 | MONITORING OF AN AUTOMATED END-TO-END CRASH ANALYSIS SYSTEM - A computer-implemented method for monitoring a crash analysis system is disclosed. Log messages are accessed pertaining to the operation of a crash analysis system for analyzing a core dump. The log messages are analyzed, at a processor, in order to generate operation results data. A graphic user interface for display on a computer is generated. The graphic user interface includes a graphical representation of the operation results data. | 2017-12-28 |
20170371735 | GRAPHICAL USER INTERFACE FOR SOFTWARE CRASH ANALYSIS DATA - A computer-implemented method for providing crash results for a computer system on a graphical user interface is disclosed. A component access control feature is displayed on a graphic user interface. The component access control feature enables a user to select a component and view crash results pertaining to the component. A graphical representation for display on the graphic user interface is generated. The graphical representation includes at least a portion of a signature back trace corresponding to a crash associated with the component. | 2017-12-28 |
20170371736 | COMPUTER CRASH RISK ASSESSMENT - A computer-implemented method for assessing the risk of a future crash occurring on a computer system is disclosed. Crash results are received from a crash analysis system. The crash results are analyzed, at a processor, to determine the likelihood of the future crash occurring on the computer system. Information regarding the likelihood of the future crash occurring on the computer system is provided to a user of the computer system. | 2017-12-28 |
20170371737 | FAILURE DETECTION IN A PROCESSING SYSTEM - A first processor enters a control record in a database and then selects the control record and locks it with a pessimistic lock. If the first processor finishes its operations, it deletes the control record. A subsequent processor searches for the control record and attempts to lock it with a pessimistic lock. If the subsequent processor is successful in locking the control record, it determines that the first processor has failed in performing its process, and takes desired action. | 2017-12-28 |
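The control-record protocol above can be sketched as follows. This is a hypothetical in-memory model: a real database releases a pessimistic lock when the owning session dies, and the `alive` callback stands in for that behavior here.

```python
class ControlStore:
    """Sketch of the control-record table with pessimistic row locks."""
    def __init__(self):
        self.locks = {}               # record id -> owning session

    def insert_and_lock(self, rec_id, session):
        self.locks[rec_id] = session  # first processor enters and locks

    def finish(self, rec_id):
        del self.locks[rec_id]        # normal completion deletes the record

    def try_lock(self, rec_id, session, alive):
        """True if `session` obtains the pessimistic lock, i.e. the
        previous owner's session is no longer alive."""
        owner = self.locks.get(rec_id)
        if owner is None or not alive(owner):
            self.locks[rec_id] = session
            return True
        return False

def detect_failure(store, rec_id, my_session, alive):
    """Subsequent processor's check: record missing -> first processor
    finished normally; lock obtainable -> it failed mid-process."""
    if rec_id not in store.locks:
        return "finished"
    return "failed" if store.try_lock(rec_id, my_session, alive) else "running"
```

The key property is that no heartbeat messages are needed: the database's own lock lifetime doubles as the liveness signal.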
20170371738 | SYSTEMS, METHODS AND DEVICES FOR STANDBY POWER SAVINGS - A power delivery system of a computing system that is on alternating current (AC) power limits software administrative tasks to a system-controlled and tunable broadcast window. This window limitation allows a computing system to enter and stay in low-power states without variable disturbances from administrative functions that can be relegated to the window. For example, maintenance is restricted until the computing system broadcasts a notification. Legacy software and devices that do not understand these notifications can be told the AC power is not present nominally, and then be notified of AC power presence during maintenance intervals. | 2017-12-28 |
20170371739 | PARITY FOR INSTRUCTION PACKETS - Systems and methods of error checking for instructions include an assembler for creating an instruction packet with one or more instructions, determining whether the parity of the instruction packet matches a predesignated parity, and, if the parity of the instruction packet does not match the predesignated parity, using a bit of the instruction packet to change the parity of the instruction packet to match the predesignated parity. The instruction packet with the predesignated parity is stored in a memory, and may eventually be retrieved by a processor for execution. If there is an error in the instruction packet retrieved from the memory, the error is detected based on comparing the parity of the instruction packet to the predesignated parity. | 2017-12-28 |
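The parity scheme described above is simple enough to sketch directly. This is a minimal illustration of the general technique, with the choice of bit index 0 as the designated parity bit being an assumption, not something the abstract specifies:

```python
def packet_parity(bits):
    """Parity of an instruction packet: XOR of all bits (0 = even, 1 = odd)."""
    p = 0
    for bit in bits:
        p ^= bit
    return p

def force_parity(bits, predesignated_parity, parity_bit_index=0):
    """If the packet's parity does not match the predesignated parity,
    flip a designated bit so that it does (index 0 is hypothetical)."""
    out = list(bits)
    if packet_parity(out) != predesignated_parity:
        out[parity_bit_index] ^= 1
    return out

packet = [1, 0, 1, 1, 0, 0, 1, 0]          # four 1s -> even parity (0)
stored = force_parity(packet, predesignated_parity=1)
assert packet_parity(stored) == 1          # packet now carries the predesignated parity

# At fetch time, any single-bit error is detectable as a parity mismatch:
corrupted = list(stored)
corrupted[5] ^= 1
assert packet_parity(corrupted) != 1       # mismatch -> error detected
```

As with any single parity bit, this detects an odd number of flipped bits but cannot locate the error or detect an even number of flips.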
20170371740 | MEMORY DEVICE AND REPAIR METHOD WITH COLUMN-BASED ERROR CODE TRACKING - A memory device is disclosed that includes a row of storage locations that form plural columns. The plural columns include data columns to store data and a tag column to store tag information associated with error locations in the data columns. Each data column is associated with an error correction location including an error code bit location. Logic retrieves and stores the tag information associated with the row in response to activation of the row. A bit error in an accessed data column is repaired by a spare bit location based on the tag information. | 2017-12-28 |
20170371741 | TECHNOLOGIES FOR PROVIDING FILE-BASED RESILIENCY - Technologies for providing file-based data resiliency include an apparatus having a memory to store file data and a processor to manage encode or decode operations on the file data. The processor is to determine an increase in file size to be allocated for a reserved portion of a file to be stored in the memory, generate an erasure code based on content of the file and the determined increase in file size, wherein the erasure code is to facilitate decorruption of the file, and write the erasure code to the reserved portion of the file. | 2017-12-28 |
20170371742 | STORAGE DEVICE - A storage device includes a nonvolatile memory device and a controller. The nonvolatile memory device includes a plurality of memory blocks. Each of the plurality of memory blocks is divided into a plurality of zones and is formed on a substrate. Each of the plurality of zones comprises one or more word lines. The controller performs a reliability verification read operation on a first zone of the plurality of zones of a memory block selected from the plurality of memory blocks if the number of read operations performed on the first zone reaches a first threshold value, and performs the reliability verification read operation on a second zone of the plurality of zones of the selected memory block if the number of read operations performed on the second zone reaches a second threshold value. | 2017-12-28 |
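The per-zone threshold logic above amounts to read counting with independent thresholds. A minimal sketch of that bookkeeping (zone names, threshold values, and the reset-after-verify policy are illustrative assumptions, not taken from the abstract):

```python
class ZoneReadTracker:
    """Per-zone read counters; a reliability-verification read is triggered
    when a zone's count reaches its own threshold."""
    def __init__(self, thresholds):
        self.thresholds = thresholds               # zone -> threshold value
        self.counts = {z: 0 for z in thresholds}
        self.verified = []                         # zones verified, in order

    def record_read(self, zone):
        self.counts[zone] += 1
        if self.counts[zone] >= self.thresholds[zone]:
            self.verified.append(zone)             # controller issues the verification read
            self.counts[zone] = 0                  # assumed: count restarts after verifying

tracker = ZoneReadTracker({"zone0": 3, "zone1": 5})
for _ in range(3):
    tracker.record_read("zone0")
for _ in range(4):
    tracker.record_read("zone1")
assert tracker.verified == ["zone0"]               # zone0 hit its threshold; zone1 has not
```

Giving each zone its own threshold lets the controller verify read-disturb-prone zones (e.g. edge word lines) more aggressively than others.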
20170371743 | SYSTEM AND METHOD FOR PROTECTING GPU MEMORY INSTRUCTIONS AGAINST FAULTS - A system and method for protecting memory instructions against faults are described. The system and method include converting the slave instructions to dummy operations, modifying a memory arbiter to issue up to N master and N slave global/shared memory instructions per cycle, sending master memory requests to the memory system, using slave requests for error checking, entering master requests into the GM/LM FIFO, storing slave requests in a register, and comparing the entered master requests with the stored slave requests. | 2017-12-28 |
20170371744 | NON-VOLATILE STORAGE SYSTEM USING TWO PASS PROGRAMMING WITH BIT ERROR CONTROL - A first phase of a programming process is performed to program data into a set of non-volatile memory cells using a set of verify references and allowing for a first number of programming errors. After completing the first phase of programming, an acknowledgement is provided to the host that the programming was successful. The memory system reads the data from the set of non-volatile memory cells and uses an error correction process to identify and correct error bits in the data read. When the memory system is idle and after the acknowledgement is provided to the host, the memory system performs a second phase of the programming process to program the corrected error bits into the set of the non-volatile memory cells using the same set of verify references and allowing for a second number of programming errors. | 2017-12-28 |
20170371745 | SEMICONDUCTOR DEVICE AND SEMICONDUCTOR SYSTEM - A semiconductor device may include an operation control circuit configured to generate a detection signal based on an internal temperature of the semiconductor device. The semiconductor device may include an error correction circuit configured to output read data as output data with or without performing an error correction operation and with or without performing a scrub operation based on the detection signal. | 2017-12-28 |
20170371746 | METHODS OF CORRECTING DATA ERRORS AND SEMICONDUCTOR DEVICES USED THEREIN - A semiconductor device correcting data errors using a Hamming code is provided. The Hamming code is realized by an error check matrix, and the error check matrix includes a first sub-matrix and a second sub-matrix. The first sub-matrix includes column vectors having an odd weight. The second sub-matrix includes an up matrix and a down matrix. Each of the up matrix and the down matrix includes column vectors having an odd weight. | 2017-12-28 |
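The odd-weight-column property the abstract emphasizes is what makes single-bit and double-bit errors separable in such codes. A small illustrative check (the matrix below is a generic odd-weight-column example, not the patent's claimed matrix):

```python
def column_weights(H):
    """Number of 1s in each column of a binary matrix."""
    return [sum(row[c] for row in H) for c in range(len(H[0]))]

def syndrome(H, word):
    """Binary syndrome H * word (mod 2)."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

# A 4x8 check matrix in which every column has odd weight (1 or 3).
H = [
    [1, 0, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
]
assert all(w % 2 == 1 for w in column_weights(H))

codeword = [1, 1, 0, 1, 0, 0, 1, 0]        # satisfies H * w = 0
assert syndrome(H, codeword) == [0, 0, 0, 0]

# The syndrome of k flipped bits is the XOR of k odd-weight columns, so its
# weight parity equals k mod 2: one error -> odd syndrome, two errors -> even.
one_error = list(codeword); one_error[5] ^= 1
assert sum(syndrome(H, one_error)) % 2 == 1
two_errors = list(one_error); two_errors[2] ^= 1
assert sum(syndrome(H, two_errors)) % 2 == 0
```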
20170371747 | SCALING QUORUM BASED REPLICATION SYSTEMS - A computer determines whether it has received user input or a node within a replica set has reached a capacity threshold. Based on receiving the user input or determining that a node within the replica set has reached the capacity threshold, the computer creates a snapshot of the data stored in the replica set and partitions the data based on the created snapshot. The computer then initializes nodes within a new replica set and moves a partition from the original replica set to the new replica set before deleting the other partition from the old replica set. | 2017-12-28 |
20170371748 | SYSTEM AND METHOD FOR CREATING SELECTIVE SNAPSHOTS OF A DATABASE - A system is provided for creating selective snapshots of a database that is stored as one or more segments, wherein a segment comprises one or more memory pages. The system includes a memory storage comprising instructions and one or more processors in communication with the memory. The one or more processors execute the instructions to determine whether a snapshot process is configured to access a selected segment of the one or more segments, assign a positive mapping status to a segment determined to be accessed by the snapshot process and a negative mapping status to a non-accessed segment, and create a snapshot by forking the snapshot process with an address space that comprises a subset of the one or more segments. | 2017-12-28 |
20170371749 | BACKUP IMAGE RESTORE - An example apparatus includes a virtual drive controller module to receive a read request from a guest virtual machine (VM) during a restore operation. The apparatus also includes a virtual drive manager module to determine whether data associated with the read request is stored in a storage volume of the guest VM using a sector mapping lookup table during the restore operation. In response to a determination that the data is absent in the storage volume, the virtual drive manager module is to copy the data from a backup image associated with the guest VM to the storage volume, update the sector mapping lookup table to indicate that the data is stored in the storage volume, and transmit the data to the guest VM. | 2017-12-28 |
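The copy-on-read restore flow above follows a well-known lazy-restore pattern. A minimal in-memory sketch (the class, the dict-backed "image", and the set standing in for the sector mapping lookup table are all illustrative assumptions):

```python
class RestoringVirtualDrive:
    """Lazy restore: a sector is copied from the backup image the first
    time the guest VM reads it, then served from the storage volume."""
    def __init__(self, backup_image):
        self.backup = backup_image     # sector -> data, the backup image
        self.volume = {}               # guest VM's storage volume
        self.present = set()           # stands in for the sector mapping lookup table

    def read(self, sector):
        if sector not in self.present:                 # data absent in the volume
            self.volume[sector] = self.backup[sector]  # copy from the backup image
            self.present.add(sector)                   # update the mapping table
        return self.volume[sector]                     # transmit the data to the guest

drive = RestoringVirtualDrive({0: b"boot", 1: b"data"})
assert drive.read(0) == b"boot"                        # first read triggers the copy
assert 0 in drive.present and 1 not in drive.present   # sector 1 not yet restored
assert drive.read(0) == b"boot"                        # later reads are served locally
```

The point of the pattern is that the guest VM can run immediately: sectors are restored on demand rather than waiting for the whole image to be copied first.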
20170371750 | METHOD AND APPARATUS FOR RESTORING DATA FROM SNAPSHOTS - According to at least one aspect, a database system is provided. The database system includes at least one processor configured to receive a restore request to restore a portion of a dataset to a previous state and, responsive to receipt of the restore request, identify at least one snapshot from a plurality of snapshots of at least some data in the dataset to read based on the restore request and write a portion of the data in the identified at least one snapshot to the dataset to restore the portion of the dataset to the previous state. | 2017-12-28 |
20170371751 | RELATIONAL DATABASE RECOVERY - A database recovery and index rebuilding method involves reading data pages for a database to be recovered as recovery bases; retrieving all log records from stored post-backup updates and sorting the retrieved log records; as the data pages to be recovered are read, applying the sorted log records to their respective data pages; as the applying completes for individual data pages, extracting and sorting index keys from the individual data pages for which the applying is complete, until all index keys have been extracted from all individual data pages and sorted; on an individual recovered page basis, writing the recovered individual data pages into the database; and when all index keys have been extracted and sorted from all of the recovered individual data pages, rebuilding indexes of the database using the sorted index keys and writing the rebuilt indexes to the non-transitory storage. | 2017-12-28 |
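The recovery pipeline above (apply sorted logs per page, extract keys as each page completes, rebuild indexes last) can be sketched end to end. This is a toy model, with pages as dicts and log records as tuples, purely to show the ordering of the phases; it is not the patented implementation:

```python
def recover(pages, log_records):
    """pages: page_id -> {key: value} (the recovery bases read from backup);
    log_records: iterable of (page_id, key, value) post-backup updates."""
    # Retrieve and sort all log records, grouping them by target page.
    logs_by_page = {}
    for page_id, key, value in sorted(log_records):
        logs_by_page.setdefault(page_id, []).append((key, value))

    all_keys = []
    for page_id, page in pages.items():
        for key, value in logs_by_page.get(page_id, []):
            page[key] = value                       # apply logs to this page
        # As applying completes for this page, extract its index keys.
        all_keys.extend((k, page_id) for k in page)
    all_keys.sort()                                 # sort the extracted keys

    # Only after all keys are extracted and sorted is the index rebuilt.
    index = {key: pid for key, pid in all_keys}
    return pages, index

pages = {1: {"a": 0}, 2: {"c": 0}}
logs = [(2, "d", 9), (1, "b", 7)]
recovered, index = recover(pages, logs)
assert recovered[1]["b"] == 7                       # log applied to its page
assert index == {"a": 1, "b": 1, "c": 2, "d": 2}    # index rebuilt from sorted keys
```

Deferring index rebuilding until all keys are sorted avoids random index updates during recovery, which is the usual motivation for this ordering.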
20170371752 | CLOUD STORAGE WRITE CACHE MANAGEMENT SYSTEM AND METHOD - A method, computer program product, and computer system for monitoring health of at least one storage device of a cache in a clustered system. A recovery journal may be maintained, wherein the recovery journal may identify whether one or more chunks of data stored in the cache have been dumped from the at least one storage device to persistent storage in the clustered system. A state of the at least one storage device may be determined based upon, at least in part, the health of the at least one storage device. A recovery action may be performed on the one or more chunks of data stored in the at least one storage device based upon, at least in part, the state of the at least one storage device. | 2017-12-28 |
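The recovery journal described above tracks, per chunk, whether cached data has already been dumped to persistent storage, so that only dirty chunks need recovery when a cache device degrades. A minimal sketch of that bookkeeping (the API and chunk identifiers are hypothetical):

```python
class RecoveryJournal:
    """Per-chunk dump status for a write cache; drives the recovery action
    when the health check marks a cache device as failing."""
    def __init__(self):
        self.dumped = {}                   # chunk_id -> dumped to persistent storage?

    def write_chunk(self, chunk_id):
        self.dumped[chunk_id] = False      # dirty: exists only on the cache device

    def mark_dumped(self, chunk_id):
        self.dumped[chunk_id] = True       # persisted: safe to lose the cache copy

    def chunks_needing_recovery(self):
        return sorted(c for c, done in self.dumped.items() if not done)

journal = RecoveryJournal()
journal.write_chunk("c1")
journal.write_chunk("c2")
journal.mark_dumped("c1")
# Health check reports the cache device failing -> recover only the dirty chunks.
assert journal.chunks_needing_recovery() == ["c2"]
```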
20170371753 | MEMORY APPARATUS FOR APPLYING FAULT REPAIR BASED ON PHYSICAL REGION AND VIRTUAL REGION AND CONTROL METHOD THEREOF - Provided are a memory apparatus for applying fault repair based on a physical region and a virtual region and a control method thereof. That is, the fault repair is applied based on the physical region and the virtual region which use an information storage table of a virtual basic region using a hash function, thereby improving efficiency of the fault repair. | 2017-12-28 |
20170371754 | Fault Tolerant Communication System - Described is a differential data bus system which maintains error free communication despite faults in one of the data bus lines. | 2017-12-28 |
20170371755 | NON-VOLATILE MEMORY WITH DYNAMIC REPURPOSE OF WORD LINE - A non-volatile memory system includes a plurality of non-volatile data memory cells arranged into groups of data memory cells, a plurality of select devices connected to the groups of data memory cells, a selection line connected to the select devices, a plurality of data word lines connected to the data memory cells, and one or more control circuits connected to the selection line and the data word lines. The one or more control circuits are configured to determine whether the select devices are corrupted. If the select devices are corrupted, then the one or more control circuits repurpose one of the word lines (e.g., the first data word line closest to the select devices) to be another selection line, thus operating the memory cells connected to the repurposed word line as select devices. | 2017-12-28 |
20170371756 | MONITOR PERIPHERAL DEVICE BASED ON IMPORTED DATA - Various examples described herein provide for monitoring a peripheral device based on data imported from the peripheral device. The imported data may comprise a script associated with monitoring or managing the peripheral device, or descriptive data describing a set of monitor values on the peripheral device. | 2017-12-28 |
20170371757 | SYSTEM MONITORING METHOD AND APPARATUS - A system monitoring method and apparatus comprises: periodically collecting status indicator data of a monitored system to generate a status indicator data sequence; selecting predetermined pieces of status indicator data in reverse chronological order of collection time; determining, from predetermined categories, the category to which the selected pieces of status indicator data belong; selecting, from the historical status indicator data, status indicator data belonging to the determined category and obtained in a collection period as characteristic data of the determined category; calculating a predicted value of a status indicator of the system at a predicting moment using the characteristic data; and determining whether the system is abnormal based on the difference between the calculated predicted value and the true value of the status indicator collected at the predicting moment. This approach can rapidly and accurately detect system abnormalities. | 2017-12-28 |
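The final step above (flag an abnormality when predicted and observed values diverge) can be sketched with a deliberately simple predictor. The abstract leaves the prediction model open; a mean over the characteristic data and a fixed tolerance are used here purely for illustration:

```python
def predict_next(history):
    """Toy predictor: mean of the characteristic data for the category.
    (A real system would use a model fit to the status indicator sequence.)"""
    return sum(history) / len(history)

def is_abnormal(history, true_value, tolerance):
    """Abnormal if |predicted - observed| exceeds the tolerance."""
    predicted = predict_next(history)
    return abs(predicted - true_value) > tolerance

# Characteristic data for one category, e.g. CPU load in comparable periods.
cpu_load = [0.42, 0.40, 0.44, 0.41, 0.43]          # mean = 0.42
assert is_abnormal(cpu_load, 0.43, tolerance=0.10) is False   # within tolerance
assert is_abnormal(cpu_load, 0.95, tolerance=0.10) is True    # large deviation
```

Selecting characteristic data from the same category (e.g. the same weekday or workload phase) is what keeps the comparison meaningful: the predicted value is compared against history that was collected under similar conditions.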
20170371758 | TECHNIQUES FOR ACCURATELY APPRISING A USER OF PROGRESS IN BOOTING A VIRTUAL APPLIANCE - A method, performed by a computing device, includes (a) building a data structure that describes dependence relationships between components of a virtual appliance, the components comprising respective computational processes which may be invoked during booting, a dependence relationship indicating that one component must complete before a second component may be invoked, (b) identifying, with reference to the data structure and an essential set of components which were pre-defined to be essential to the virtual appliance, a set of components that must complete for booting to be considered finished, and, after identifying the required set of components, repeatedly (c) querying each required component for its respective completion status, (d) calculating an estimated completion percentage for booting the virtual appliance with reference to the respective completion statuses of each required component versus all required components, and (e) displaying an indication of the completion percentage to a user via a user interface. | 2017-12-28 |
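Steps (a), (b), and (d) above reduce to a transitive closure over the dependence structure followed by a completion ratio. A minimal sketch under those assumptions (component names, the dict-of-sets representation, and integer-percentage rounding are all illustrative):

```python
def required_components(dependencies, essential):
    """Transitive closure: everything an essential component depends on,
    directly or indirectly, must complete before booting is finished.
    dependencies: component -> set of components it depends on."""
    required, stack = set(), list(essential)
    while stack:
        comp = stack.pop()
        if comp not in required:
            required.add(comp)
            stack.extend(dependencies.get(comp, ()))
    return required

def completion_percentage(required, completed):
    """Estimated boot progress: completed required components vs all required."""
    done = sum(1 for c in required if c in completed)
    return 100 * done // len(required)

deps = {"web-ui": {"app-server"}, "app-server": {"database"}, "metrics": set()}
required = required_components(deps, essential={"web-ui"})
assert required == {"web-ui", "app-server", "database"}       # "metrics" is not required
assert completion_percentage(required, completed={"database"}) == 33
```

Basing the percentage on the required set only, rather than on every component, is what keeps the progress indication accurate: a slow non-essential component (like `metrics` here) cannot stall the reported boot progress.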
20170371759 | INTENT-BASED INTERACTION WITH CLUSTER RESOURCES - Aspects extend to methods, systems, and computer program products for intent-based interactions with cluster resources. One or more computer systems are joined in a computer system cluster to provide defined computing functionality (e.g., storage, compute, network, etc.) to an external system. In one aspect, a data collection intent facilitates collection and aggregation of data to form a health report for one or more components of the computer system cluster. In another aspect, a command intent facilitates implementing a command at one or more components of the computer system cluster. Services span machines of the computer system cluster to abstract lower level aspects of data collection and aggregation and command implementation for higher level aspects of data collection and aggregation and command implementation. Services can be integrated into an operating system to relieve users from having to have operating system knowledge. | 2017-12-28 |