32nd week of 2015 patent application highlights part 43 |
Patent application number | Title | Published |
20150220311 | COMPUTER IMPLEMENTED MODELING SYSTEM AND METHOD - A computer implemented modeling method and system that includes using a visual programming language to create a topological framework model configured to spatially arrange a set of one or more agent submodels and incorporate an environmental property submodel for each position of the topological framework model. The method further includes capturing the topological framework model by converting elements of the visual programming language into a textual programming language. | 2015-08-06 |
20150220312 | GENERATING IDENTIFIERS FOR USER INTERFACE ELEMENTS OF A WEB PAGE OF A WEB APPLICATION - Disclosed are database systems, methods, and computer program products for generating identifiers for user interface elements of a web page of a web application. In some implementations, a server of a database system analyzes a copy of source code for a first web page. The first web page may comprise user interface elements capable of being generated from the source code. The server identifies one or more of the user interface elements of the first web page as not having a unique identifier or as having a dynamically generated identifier. The server generates, for each identified user interface element, a further unique identifier to be associated with the respective identified user interface element. The server generates edited source code comprising one or more further unique identifiers for the identified one or more user interface elements. The server stores the edited source code in a database of the database system. | 2015-08-06 |
20150220313 | REGISTER LIVENESS ANALYSIS FOR SIMD ARCHITECTURES - Systems and methods of allocating physical registers to variables may involve identifying a partial definition of a variable in an inter-procedural control flow graph. A determination can be made as to whether to terminate a live range of the variable based at least in part on the partial definition. Additionally, a physical register may be allocated to the variable based at least in part on the live range. | 2015-08-06 |
20150220314 | CONTROL FLOW OPTIMIZATION FOR EFFICIENT PROGRAM CODE EXECUTION ON A PROCESSOR - A method includes identifying a divergent region of interest (DRI) not including a post dominator node thereof within a control flow graph, and introducing a decision node in the control flow graph such that the decision node post-dominates an entry point of the DRI and is dominated by the entry point. The method also includes redirecting a regular control flow path within the control flow graph from another node previously coupled to the DRI to the decision node, and redirecting a runaway path from the another node to the decision node. Further, the method includes marking the runaway path to differentiate the runaway path from the regular control flow path, and directing control flow from the decision node to an originally intended destination of each of the regular control flow path and the runaway path based on the marking to provide for program thread synchronization and optimization within the DRI. | 2015-08-06 |
20150220315 | METHOD AND APPARATUS FOR COMPILING - A compiling apparatus generates a dependency tree representing dependency relations among a plurality of instructions included in first code. The compiling apparatus detects, from the dependency tree, a partial tree including a first instruction, a second instruction, and a third instruction that depends on the operation results of the first and second instructions, and rewrites the instructions corresponding to the partial tree to a set of instructions including a plurality of complex instructions each of which causes a processor to perform a complex operation including a plurality of operations. The compiling apparatus generates second code on the basis of the dependency tree and the set of instructions. | 2015-08-06 |
20150220316 | APPLICATION PROGRAM EVANESCENCE ON A COMPUTING DEVICE - Application programs are automatically uninstalled from a computing device based upon contextual information that is inconsistent with their continued availability. Programs are purchased for a limited context where, once the context is no longer valid, the application program is automatically uninstalled. Alternatively, or in addition, context under which an application program is automatically uninstalled is user-specified when the application is initially purchased or installed, or at a subsequent time. Subsequently, users are notified when an application is uninstalled, or is about to be uninstalled. Such a notification provides the user with an opportunity to override the automatic uninstallation or, alternatively, or in addition, provides the user with access to alternative, or supplemental, application programs. Information associated with automatic application uninstallation is retained in an application manifest generated by the application author, or by processes executing on the computing device, and is stored as part of the application, or separately. | 2015-08-06 |
20150220317 | METHOD, EQUIPMENT AND SYSTEM OF INCREMENTAL UPDATE - The disclosure discloses a method, equipment and system for incremental updates in the information processing technology. The method includes: unpacking a new version installation package to obtain a new version unpacked folder that includes at least one new version unpacked file and a new version signature subfolder; obtaining header file information of the at least one new version unpacked file, and converting a format of the header file information; packing the converted new version folder into a new version archive package and obtaining at least one historical version archive package; generating at least one differential file; and releasing the at least one differential file, wherein the at least one differential file that is released is selected by a client that has memory and at least one processor to download and form a second new version installation package according to the at least one differential file that is downloaded. | 2015-08-06 |
20150220318 | WIRELESS FIRMWARE UPGRADES TO AN ALARM SECURITY PANEL - A panel is described including stored data that is associated with the operation of the panel, and a server configured to provide a notification that an update to the data is available, the notification provided over a first communication network, and provide an update to the data via a wireless communication with the panel over a second communication network different than the first communication network. | 2015-08-06 |
20150220319 | Method and System for Updating a Firmware of a Security Module - A method for updating a firmware of a security module in equipment comprises a device and the security module arranged such that data can be exchanged between the security module and the device. A first message is received by the security module and indicates the availability of a firmware update provided by a provider and wherein the first message contains a transaction number individual for the security module. A second message is transferred from the equipment to the provider and the firmware update is requested from the provider. The second message contains the individual transaction number to enable the provider to conduct an identification of the security module. The firmware update is transferred from the provider to the equipment based on the individual transaction number, and is stored in a memory of the device. The firmware is unpacked by a boot loader of the equipment or the security module. | 2015-08-06 |
20150220320 | METHOD AND APPARATUS FOR PATCHING - The present invention belongs to the computer field and discloses a method for patching, the method comprising: in response to a need to patch a first content that is already in the memory, distinguishing between a new content and an old content, the new content being the patched first content and the old content being the first content already in the memory; and in response to the new content being loaded into the memory, mapping to the new content a new process that needs to apply the first content, wherein the new process comprises a process that is started after the new content is loaded into the memory. The present invention further discloses an apparatus for patching. With the technical solution provided by the present invention, dynamic patching of a virtual machine or a physical machine can be performed without stopping a running process. | 2015-08-06 |
20150220321 | METHOD OF UPDATING SOFTWARE FOR VEHICLE - Disclosed is a method of updating software for a vehicle. The method includes determining whether a vehicle terminal of the vehicle is running out-of-date software; selecting a target vehicle among neighboring vehicles via wireless communication, wherein the target vehicle is running updated software; receiving a shared update file from the target vehicle via wireless communication, the shared update file based on the updated software; storing the shared update file; and updating the out-of-date software using the shared update file. | 2015-08-06 |
20150220322 | SYSTEM AND METHOD FOR REINSTALLING, UPGRADING OR DOWNGRADING AN OPERATING SYSTEM - A method and device for installing, reinstalling, upgrading, or downgrading an operating system. The method including the steps of: mounting, on a computing device having a primary memory and a secondary memory storing a first operating system, a virtual disk in the primary memory; installing, on the virtual disk an installation operating system; staging in the primary memory a desired operating system; staging in the primary memory an installation file configured to install the desired operating system in the secondary memory; and executing the installation file to install the desired operating system in the secondary memory. | 2015-08-06 |
20150220323 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING SYSTEM - An information processing apparatus includes a storage unit that stores combination information relevant to combinations of different types of first programs that can be installed in a device, the combination information including memory consumption amounts of the combinations; a receiving unit that receives an install target first program and device information relevant to the device; and a determining unit that determines validity of installing the install target first program in the device by determining, by referring to the combination information, a predicted memory consumption amount corresponding to a first combination including the install target first program and an existing first program that is installed in the device and indicated in the device information, and by comparing the predicted memory consumption amount with a device memory consumption amount of the device indicated in the device information. | 2015-08-06 |
20150220324 | UPDATING SOFTWARE PRODUCTS ON VIRTUAL MACHINES WITH SOFTWARE IMAGES OF NEW LEVELS - A solution for updating at least one software product installed on a virtual machine, including providing a software image of at least one new virtual disk storing a new level of the software product together with new metadata including an indication of at least one new activation procedure of the new level of the software product, and replacing a current level of the software product with the new level of the software product, the current level of the software product being stored in at least one current virtual disk of the virtual machine together with current metadata including current activation information of the current level of the software product, where the replacing includes removing at least one current virtual disk from the virtual machine; adding at least one new virtual disk to the virtual machine; and running at least one new activation procedure according to the current activation information. | 2015-08-06 |
20150220325 | AGILE FRAMEWORK FOR VERTICAL APPLICATION DEVELOPMENT AND DELIVERY - A software development platform comprising one or more user-selectable modular units containing a vertical stack of back-end business logic. One or more user-selectable modular units containing domain model components. One or more user-selectable modular units containing front end presentation components. A virtual appliance comprising application-specific logic that includes one or more of the modular units containing the vertical stack of back-end business logic, one or more of the user-selectable modular units containing the domain model components and one or more of the user-selectable modular units containing front end presentation components. | 2015-08-06 |
20150220326 | Mobile Terminal and Software Upgrade Method Thereof - A mobile terminal and a software upgrade method thereof are provided. The method includes acquiring a differential upgrade package for software of an original version; and using the software of the original version as software of a reference version, differentially upgrading, by using the differential upgrade package, the software of the reference version to software of an upgrade version subsequently used by a mobile terminal, and retaining the software of the original version at the same time. According to the foregoing disclosed content, in the technical solutions disclosed in the embodiments of the present invention, the software of the original version can be retained to ensure that the software of the reference version is unchanged, thereby effectively resolving the problem that the software of the original version cannot be retained and that control of a subsequently upgraded reference version becomes disordered. | 2015-08-06 |
20150220327 | EXTENSIBLE DATA MODEL AND SERVICE FOR INFRASTRUCTURE MANAGEMENT - A method for defining new resource types in an operating software system, comprising electronically modifying a secured entity table to add a new resource. Electronically modifying a secured entity action table to add the new resource. Electronically modifying a resource type table to add the new resource. Electronically modifying a resource relation table to add the new resource relationships. Electronically flushing one or more runtime caches to deploy the new resource without recompiling the software system. Electronically detecting and handling compatible and incompatible schema upgrades. | 2015-08-06 |
20150220328 | SYSTEM FOR DETECTING CALL STACK TAMPERING - The invention relates to a method for detecting a subroutine call stack modification, including the steps of, when calling a subroutine, placing a return address at the top of the stack; at the end of the subroutine, using the address at the top of the stack as the return address, and removing the address from the stack; when calling the subroutine, accumulating the return address in a memory location with a first operation; at the end of the subroutine, accumulating the address from the top of the stack in the memory location with a second operation, reciprocal of the first operation; and detecting a change when the content of the memory location is different from its initial value. | 2015-08-06 |
20150220329 | SYSTEM AND METHOD TO MAP DEFECT REDUCTION DATA TO ORGANIZATIONAL MATURITY PROFILES FOR DEFECT PROJECTION MODELING - A method is implemented in a computer infrastructure having computer executable code tangibly embodied on a computer readable storage medium having programming instructions. The programming instructions are operable to receive a maturity level for an organization and select at least one defect analysis starter/defect reduction method (DAS/DRM) defect profile based on the maturity level. Additionally, the programming instructions are operable to determine a projection analysis for one or more stages of the life cycle of a software code project of the organization based on the at least one DAS/DRM defect profile. | 2015-08-06 |
20150220330 | TEMPLATE DERIVATION FOR CONFIGURATION OBJECT MANAGEMENT - A method for template derivation comprising generating a plurality of templates using a processor, each template having a plurality of unfixed attributes. The plurality of templates are stored in a non-transient data memory. One of the plurality of templates is retrieved using the processor. One or more of the unfixed attributes of the retrieved template is fixed. The modified template is stored as a new template having a plurality of fixed attributes and a plurality of unfixed attributes. | 2015-08-06 |
20150220331 | RESOLVING MERGE CONFLICTS THAT PREVENT BLOCKS OF PROGRAM CODE FROM PROPERLY BEING MERGED - This disclosure relates to resolving merge conflicts that prevent blocks of program code from properly being merged. A merge conflict that prevents blocks of program code from properly being merged can be identified. Responsive to identifying the merge conflict, a pattern of a respective portion of at least one of the blocks of program code can be identified, and a determination can be made as to whether the pattern matches an existing merge rule. Responsive to determining that the pattern matches the existing merge rule, the existing merge rule can be validated against a syntax of the portion of at least one of the blocks of program code. Responsive to the existing merge rule successfully validating against the syntax of the portion of at least one of the blocks of program code, the existing merge rule can be applied to resolve the merge conflict. | 2015-08-06 |
20150220332 | RESOLVING MERGE CONFLICTS THAT PREVENT BLOCKS OF PROGRAM CODE FROM PROPERLY BEING MERGED - This disclosure relates to resolving merge conflicts that prevent blocks of program code from properly being merged. A merge conflict that prevents blocks of program code from properly being merged can be identified. Responsive to identifying the merge conflict, a pattern of a respective portion of at least one of the blocks of program code can be identified, and a determination can be made as to whether the pattern matches an existing merge rule. Responsive to determining that the pattern matches the existing merge rule, the existing merge rule can be validated against a syntax of the portion of at least one of the blocks of program code. Responsive to the existing merge rule successfully validating against the syntax of the portion of at least one of the blocks of program code, the existing merge rule can be applied to resolve the merge conflict. | 2015-08-06 |
20150220333 | GENERATION OF API CALL GRAPHS FROM STATIC DISASSEMBLY - Data is received that includes at least a portion of a program. Thereafter, entry point locations and execution-relevant metadata of the program are identified and retrieved. Regions of code within the program are then identified using static disassembly and based on the identified entry point locations and metadata. In addition, entry points are determined for each of a plurality of functions. Thereafter, a set of possible call sequences are generated for each function based on the identified regions of code and the determined entry points for each of the plurality of functions. Related apparatus, systems, techniques and articles are also described. | 2015-08-06 |
20150220334 | Single Code Set Applications Executing In A Multiple Platform System - Embodiments of the claimed subject matter are directed to methods and a system that allows an application comprising a single code set under the COBOL Programming Language to execute in multiple platforms on the same multi-platform system (such as a mainframe). In one embodiment, a single code set is pre-compiled to determine specific portions of the code set compatible with the host (or prospective) platform. Once the code set has been pre-compiled to determine compatible portions, those portions may be compiled and executed in the host platform. According to these embodiments, an application may be executed from a single code set that is compatible with multiple platforms, thereby potentially reducing the complexity of developing the application for multiple platforms. | 2015-08-06 |
20150220335 | METHOD AND SYSTEM FOR ANALYZING AN EXTENT OF SPEEDUP ACHIEVABLE FOR AN APPLICATION IN A HETEROGENEOUS SYSTEM - The present disclosure includes, in a heterogeneous system, receiving a desired speedup of an application as input and performing a static analysis and a dynamic analysis of the application. The dynamic analysis of the application comprises, identifying a set of parameters including, an end-to-end execution time of the application, an execution time of data parallel loops in the application, an execution time of non-data parallel loops in the application, and an amount of physical memory used by each data structure in each data parallel loop. Dynamic analysis also includes calculating and providing the feasibility of achieving the desired speedup of the application based on the identified set of parameters, and satisfaction of each of, an initialization invariant, a data-parallel invariant and a data transfer invariant. | 2015-08-06 |
20150220336 | SYSTEMS AND METHODS FOR IDENTIFYING SOFTWARE PERFORMANCE INFLUENCERS - Described are a system and method for identifying variables which impact performance of software under development. Data is collected that is related to performance characteristics of the software under development. Performance change gradients are determined between previous builds of the software under development. A set of performance change factors are generated from the collected data that corresponds to each performance change gradient. Performance characteristic data corresponding to a current build of the software under development are compared to the performance change gradients. At least one fault component from the set of performance change factors that influences performance of the current build is output in response to the comparison between the performance characteristic data corresponding to the current build and the plurality of performance change gradients. | 2015-08-06 |
20150220337 | DEVICE, SYSTEM AND METHOD FOR CONTROLLING AN OPERATION | 2015-08-06 |
20150220338 | SOFTWARE POLLING ELISION WITH RESTRICTED TRANSACTIONAL MEMORY - Generally, this disclosure provides systems, devices, methods and computer readable media for software polling elision with restricted transactional memory. The device may include a restricted transactional memory (RTM) processor configured to monitor a region associated with a transaction and to enable an abort of the transaction, wherein the abort nullifies modifications to the region, the modifications associated with processing within the transaction prior to the abort. The device may also include a code module configured to: produce a first request; send the first request to an external processing entity; enter the transaction; produce a second request; commit the transaction in response to a completion indication from the external processing entity; and abort the transaction in response to a non-completion indication from the external entity. | 2015-08-06 |
20150220339 | ON-THE-FLY CONVERSION DURING LOAD/STORE OPERATIONS IN A VECTOR PROCESSOR - Systems and methods for performing on-the-fly format conversion on data vectors during load/store operations are described herein. In one embodiment, a method for loading a data vector from a memory into a vector unit comprises reading a plurality of samples from the memory, wherein the plurality of samples are packed in the memory. The method also comprises unpacking the samples to obtain a plurality of unpacked samples, performing format conversion on the unpacked samples in parallel, and sending at least a portion of the format-converted samples to the vector unit. | 2015-08-06 |
20150220340 | TECHNIQUES FOR HETEROGENEOUS CORE ASSIGNMENT - Various embodiments are generally directed to techniques for assigning instances of blocks of instructions of a routine to one of multiple types of core of a heterogeneous set of cores of a processor component. An apparatus to select types of cores includes a processor component; a core selection component for execution by the processor component to select a core of multiple cores to execute an initial subset of multiple instances of an instruction block in parallel based on characteristics of instructions of the instruction block, and to select a core of the multiple cores to execute remaining instances of the multiple instances of the instruction block in parallel based on characteristics of execution of the initial subset stored in an execution database; and a monitoring component for execution by the processor component to record the characteristics of execution of the initial subset in the execution database. Other embodiments are described and claimed. | 2015-08-06 |
20150220341 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR IMPLEMENTING SOFTWARE-BASED SCOREBOARDING - A system, method, and computer program product are provided for implementing a software-based scoreboarding mechanism. The method includes the steps of receiving a dependency barrier instruction that includes an immediate value and an identifier corresponding to a first register and, based on a comparison of the immediate value to the value stored in the first register, dispatching a subsequent instruction to at least a first processing unit of two or more processing units. | 2015-08-06 |
20150220342 | METHOD AND APPARATUS FOR ENABLING A PROCESSOR TO GENERATE PIPELINE CONTROL SIGNALS - A chaining bit decoder of a computer processor receives an instruction stream. The chaining bit decoder selects a group of instructions from the instruction stream. The chaining bit decoder extracts a designated bit from each instruction of the instruction stream to produce a sequence of chaining bits. The chaining bit decoder decodes the sequence of chaining bits. The chaining bit decoder identifies zero or more instruction stream dependencies among the selected group of instructions in view of the decoded sequence of chaining bits. The chaining bit decoder outputs control signals to cause one or more pipeline stages of the processor to execute the selected group of instructions in view of the identified zero or more instruction stream dependencies among the selected group of instructions. | 2015-08-06 |
20150220343 | Computer Processor Employing Phases of Operations Contained in Wide Instructions - A computer processor employs an instruction processing pipeline that processes a sequence of wide instructions each having an encoding that represents a plurality of different operations. The plurality of different operations of the given wide instruction are logically organized into a number of phases having a predefined ordering such that at least one operation of the given wide instruction produces data that is consumed by at least one other operation of the given wide instruction. In certain circumstances where stalling is absent, the plurality of different operations of the phases of the given wide instruction can be issued for execution by the instruction processing pipeline over a plurality of consecutive machine cycles. | 2015-08-06 |
20150220344 | Memory Systems and Memory Control Methods - Memory systems and memory control methods are described. According to one aspect, a memory system includes a plurality of memory cells individually configured to store data, program memory configured to store a plurality of first executable instructions which are ordered according to a first instruction sequence and a plurality of second executable instructions which are ordered according to a second instruction sequence, substitution circuitry configured to replace one of the first executable instructions with a substitute executable instruction, and a control unit configured to execute the first and second executable instructions to control reading and writing of the data with respect to the memory, wherein the control unit is configured to execute the first executable instructions according to the first instruction sequence, to execute the substitute executable instruction after the execution of the first executable instructions, and to execute the second executable instructions according to the second instruction sequence as a result of execution of the substitute executable instruction. | 2015-08-06 |
20150220345 | VECTOR MASK DRIVEN CLOCK GATING FOR POWER EFFICIENCY OF A PROCESSOR - A processor includes an instruction schedule and dispatch (schedule/dispatch) unit to receive a single instruction multiple data (SIMD) instruction to perform an operation on multiple data elements stored in a storage location indicated by a first source operand. The instruction schedule/dispatch unit is to determine a first of the data elements that will not be operated to generate a result written to a destination operand based on a second source operand. The processor further includes multiple processing elements coupled to the instruction schedule/dispatch unit to process the data elements of the SIMD instruction in a vector manner, and a power management unit coupled to the instruction schedule/dispatch unit to reduce power consumption of a first of the processing elements configured to process the first data element. | 2015-08-06 |
20150220346 | OPPORTUNITY MULTITHREADING IN A MULTITHREADED PROCESSOR WITH INSTRUCTION CHAINING CAPABILITY - A computing device determines that a current software thread of a plurality of software threads having an issuing sequence does not have a first instruction waiting to be issued to a hardware thread during a clock cycle. The computing device identifies one or more alternative software threads in the issuing sequence having instructions waiting to be issued. The computing device selects, during the clock cycle by the computing device, a second instruction from a second software thread among the one or more alternative software threads in view of determining that the second instruction has no dependencies with any other instructions among the instructions waiting to be issued. Dependencies are identified by the computing device in view of the values of a chaining bit extracted from each of the instructions waiting to be issued. The computing device issues the second instruction to the hardware thread. | 2015-08-06 |
20150220347 | DETERMINISTIC AND OPPORTUNISTIC MULTITHREADING - A processing device identifies a set of software threads having instructions waiting to issue. For each software thread in the set of the software threads, the processing device binds the software thread to an available hardware context in a set of hardware contexts and stores an identifier of the available hardware context bound to the software thread to a next available entry in an ordered list. The processing device reads an identifier stored in an entry of the ordered list. Responsive to an instruction associated with the identifier having no dependencies with any other instructions among the instructions waiting to issue, the processing device issues the instruction waiting to issue to the hardware context associated with the identifier. | 2015-08-06 |
20150220348 | COMPUTING SYSTEM INITIATION - Systems, methods, and software described herein facilitate the implementation of discrete machines in a distributed data processing environment. In one example, one or more new computing devices may attempt to join the environment by transferring a Preboot Execution Environment (PXE) request to an administration system. The administration system is configured to receive the request and, in response to the request, identify boot preferences corresponding to the PXE request. The administration system is further configured to transfer boot information to the one or more computing devices based on the boot preferences. | 2015-08-06 |
20150220349 | INFORMATION PROCESSING SYSTEM, MANAGEMENT APPARATUS, AND METHOD OF CONTROLLING PROGRAMS - An information processing system is provided, including: a plurality of information processing apparatuses; and a management apparatus that manages a plurality of boot-up programs used to boot up the plurality of information processing apparatuses, the management apparatus including: a storage that stores the plurality of boot-up programs; a configuration information obtaining unit that obtains configuration information of a first information processing apparatus of the plurality of information processing apparatuses; a selector that selects a boot-up program corresponding to the first information processing apparatus from the plurality of boot-up programs stored in the storage, based on the obtained configuration information; and a transmitter that sends the boot-up program selected by the selector to the first information processing apparatus. | 2015-08-06 |
20150220350 | INFORMATION PROCESSING DEVICE AND METHOD FOR MANAGING INFORMATION PROCESSING DEVICE - An information processing system includes a plurality of information processing devices housed in a housing device, and a storage device that is mounted on the housing device and that stores pieces of start-up control information that each specify, for each of the plurality of information processing devices, a process to be performed by each of the plurality of information processing devices at the time of start-up, wherein an information processing device of the plurality of information processing devices is configured to obtain corresponding start-up control information from the pieces of the start-up control information stored in the storage device at the time of the start-up of the information processing device, and perform a process that is specified by the obtained start-up control information. | 2015-08-06 |
20150220351 | SYSTEM AND METHOD FOR PROVIDING AN IMAGE TO AN INFORMATION HANDLING SYSTEM - A system and method for providing an image to an information handling system is disclosed. A method for delivering an image may include booting an information handling system with a provisioning operating system downloaded via a network into a memory of the information handling system. The method may also include calculating, by the provisioning operating system, a fingerprint of an image stored on the information handling system. The method may additionally include determining if the fingerprint matches a previously-calculated fingerprint of the image calculated prior to delivery of the information handling system to its intended destination. The method may further include enabling the information handling system to boot from a storage resource of the information handling system in response to a determination that the fingerprint matches the previously-calculated fingerprint. | 2015-08-06 |
20150220352 | Method and System for Executing Third-Party Agent Code in a Data Processing System - In one embodiment, agent routines are executed in a first thread, each agent routine being invoked by one of the coroutines. An agent processor executes in a second thread the agents associated with the one or more agent routines, including receiving a first yield signal from a first of the coroutines indicating that a first of the agent routines yields to perform a first action that requires an action simulation, in response to the first yield signal, suspending the first coroutine, selecting a second of the coroutines from a head of a first agent queue maintained by the agent processor, and executing the second coroutine by assigning the execution lock to the second coroutine. In a third thread, a simulator simulates the first action on behalf of the first coroutine and signals the agent processor to resume the first coroutine after completing the simulation of the first action. | 2015-08-06 |
20150220353 | TECHNOLOGIES FOR OPERATING SYSTEM TRANSITIONS IN MULTIPLE-OPERATING-SYSTEM ENVIRONMENTS - Technologies for transitioning between operating systems include a computing device having a main memory and a data storage device. The computing device executes a first operating system and monitors for an operating system toggle event. The toggle event may be a software command, a hardware button press, or other user command. In response to the toggle event, the computing device copies state data of the first operating system to a reserved memory area. After copying the state data, the computing device executes a second operating system. While the second operating system is executing, the computing device copies the state data of the first operating system from the reserved memory area to the data storage device. The computing device monitors for operating system toggle events during execution of the second operating system and may similarly toggle execution back to the first operating system. Other embodiments are described and claimed. | 2015-08-06 |
20150220354 | Dynamic I/O Virtualization - A system and method for providing dynamic I/O virtualization is herein disclosed. According to one embodiment, a device capable of performing hypervisor-agnostic and device-agnostic I/O virtualization includes a host computer interface, memory, I/O devices (GPU, disk, NIC), and efficient communication mechanisms for virtual machines to communicate their intention to perform I/O operations on the device. According to one embodiment, the communication mechanism may use shared memory. According to some embodiments, the device may be implemented purely in hardware, in software, or using a combination of hardware and software. According to some embodiments, the device may share its memory with guest processes to perform optimizations including but not limited to a shared page cache and a shared heap. | 2015-08-06 |
20150220355 | METHODS AND APPARATUS FOR PROVIDING HYPERVISOR LEVEL DATA SERVICES FOR SERVER VIRTUALIZATION - A hypervisor virtual server system, including a plurality of virtual servers, a plurality of virtual disks that are read from and written to by the plurality of virtual servers, a physical disk, an I/O backend coupled with the physical disk and in communication with the plurality of virtual disks, which reads from and writes to the physical disk, a tapping driver in communication with the plurality of virtual servers, which intercepts I/O requests made by any one of said plurality of virtual servers to any one of said plurality of virtual disks, and a virtual data services appliance, in communication with the tapping driver, which receives the intercepted I/O write requests from the tapping driver, and that provides data services based thereon. | 2015-08-06 |
20150220356 | SECURE MIGRATION OF VIRTUAL MACHINES - Technologies are generally described for the secure live migration of virtual machines. The migration may take place in the context of, for example, public clouds. In various embodiments, by using a hidden process incorporated in a virtual machine's kernel and a trusted wireless and/or wired positioning service, a cloud provider and/or cloud user may be alerted about possible virtual machine hijacking/theft. The provider or user may also be provided with an approximate physical location of the platform running the compromised virtual machine for further investigation and enforcement measures. | 2015-08-06 |
20150220357 | Tagging Physical Resources In A Cloud Computing Environment - A cloud system may create physical resource tags to store relationships between cloud computing offerings, such as computing service offerings, storage offerings, and network offerings, and the specific physical resources in the cloud computing environment. Cloud computing offerings may be presented to cloud customers, the offerings corresponding to various combinations of computing services, storage, networking, and other hardware or software resources. After a customer selects one or more cloud computing offerings, a cloud resource manager or other component within the cloud infrastructure may retrieve a set of tags and determine a set of physical hardware resources associated with the selected offerings. The physical hardware resources associated with the selected offerings may be subsequently used to provision and create the new virtual machine and its operating environment. | 2015-08-06 |
20150220358 | ARRANGEMENT AND METHOD FOR THE ALLOCATION OF RESOURCES OF A PLURALITY OF COMPUTING DEVICES - An arrangement configured to allocate resources of a host system to one or more virtual machines, the arrangement comprising: an interface configured to receive a first request from a client system for a first amount of a resource of a host system to be allocated to a first virtual machine and to transmit confirmation to the client system of the allocation of the first amount of the resource; and a hypervisor module configured to allocate an amount of the resource of the host system to the first virtual machine, wherein the amount of the resource allocated to the first virtual machine is less than the first amount of the resource, such that at least a part of the first amount of the resource is available for allocation to a second virtual machine. | 2015-08-06 |
20150220359 | VIRTUALIZATION AND DYNAMIC RESOURCE ALLOCATION AWARE STORAGE LEVEL REORDERING - A system and method for reordering storage levels in a virtualized environment includes identifying a virtual machine (VM) to be transitioned and determining a new storage level order for the VM. The new storage level order reduces a VM live state during a transition, and accounts for hierarchical shared storage memory and criteria imposed by an application to reduce recovery operations after dynamic resource allocation actions. The new storage level order recommendation is propagated to VMs. The new storage level order is then applied in the VMs. A different storage level order is recommended after the transition. | 2015-08-06 |
20150220360 | METHOD AND AN APPARATUS FOR PRE-FETCHING AND PROCESSING WORK FOR PROCESSOR CORES IN A NETWORK PROCESSOR - A method and a system embodying the method for pre-fetching and processing work for processor cores in a network processor, comprising requesting pre-fetch work by a requestor; determining that the work may be pre-fetched for the requestor; searching for the work to pre-fetch; and pre-fetching the found work into one of one or more pre-fetch work-slots associated with the requestor is disclosed. | 2015-08-06 |
20150220361 | PARALLEL COMPUTER SYSTEM, CONTROL METHOD OF PARALLEL COMPUTER SYSTEM, AND COMPUTER-READABLE STORAGE MEDIUM - A parallel computer system includes: a plurality of computation nodes; and a management node that includes a memory and a processor coupled to the memory, wherein the processor is configured to: tentatively assign a computation node to an emergency job, allow scheduling of a further job to be performed while setting tentative assignment information that indicates a tentative assignment state to the emergency job and the tentatively assigned computation node when a job that is being executed in the computation node is swapped out in order to assign the computation node to the emergency job preferentially, and perform scheduling based on the tentative assignment information in order of the emergency job, a swap-in standby job, and a further job when scheduling of jobs is performed, and control execution of the jobs based on the scheduling of the jobs, which is performed by the processor. | 2015-08-06 |
20150220362 | MULTI-CORE PROCESSOR SYSTEM, ELECTRICAL POWER CONTROL METHOD, AND COMPUTER PRODUCT FOR MIGRATING PROCESS FROM ONE CORE TO ANOTHER - A multi-core processor system includes a core configured to detect that, among cores different from a specific core executing a specific process, given software different from specific software and having a function equivalent to the specific process is under execution; extract, from a database storing required computing capacities for the plural software and upon detecting that the given software is under execution, requirement values indicating the required computing capacities of the specific software and of the given software; judge, for each of the cores, whether a sum of the required computing capacities of the specific software and the given software is at most a computing capacity value of the core; assign the specific software to a core for which the sum of the required computing capacities is judged to be at most the computing capacity value of the core; and suspend the specific core upon assigning the specific software to the core. | 2015-08-06 |
20150220363 | Efficient Resource Utilization in Data Centers - A method includes identifying high-availability jobs and low-availability jobs that demand usage of resources of a distributed system. The method includes determining a first quota of the resources available to low-availability jobs as a quantity of the resources available during normal operations, and determining a second quota of the resources available to high-availability jobs as a quantity of the resources available during normal operations minus a quantity of the resources lost due to a tolerated event. The method includes executing the jobs on the distributed system and constraining a total usage of the resources by both the high-availability jobs and the low-availability jobs to the quantity of the resources available during normal operations. | 2015-08-06 |
20150220364 | System and method of providing a self-optimizing reservation in space of compute resources - A system and method of dynamically controlling a reservation of compute resources within a compute environment is disclosed. The method aspect of the invention comprises receiving a request from a requestor for a reservation of resources within the compute environment, reserving a first group of resources, evaluating resources within the compute environment to determine whether a more efficient use of the compute environment is available and, if so, canceling the reservation for the first group of resources and reserving a second group of resources of the compute environment according to the evaluation. | 2015-08-06 |
20150220365 | REMEDIATING GAPS BETWEEN USAGE ALLOCATION OF HARDWARE RESOURCE AND CAPACITY ALLOCATION OF HARDWARE RESOURCE - A usage allocation of a hardware resource to each of a number of workloads over time is determined using a demand model. The usage allocation of the resource includes a current and past actual usage allocation of the resource, a future projected usage allocation of the resource, and current and past actual usage of the resource. A capacity allocation of the resource is determined using a capacity model. The capacity allocation of the resource includes a current and past capacity and a future projected capacity of the resource. Whether a gap exists between the usage allocation and the capacity allocation is determined using a mapping model. Where the gap exists between the usage allocation of the resource and the capacity allocation of the resource, a user is presented with options determined using the mapping model and selectable by the user to implement a remediation strategy to close the gap. | 2015-08-06 |
20150220366 | TECHNIQUES FOR MAPPING LOGICAL THREADS TO PHYSICAL THREADS IN A SIMULTANEOUS MULTITHREADING DATA PROCESSING SYSTEM - A technique for mapping logical threads to physical threads of a simultaneous multithreading (SMT) data processing system includes mapping one or more logical threads to one or more physical threads based on a selected SMT mode for a processor. In this case, respective resources for each of the one or more physical threads are predefined based on the SMT mode and an identifier of the one or more physical threads. The one or more physical threads are then executed on the processor utilizing the respective resources. | 2015-08-06 |
20150220367 | REMOVAL OF IDLE TIME IN VIRTUAL MACHINE OPERATION - A computer system for providing virtualization services may execute computer programs by a virtual processor in a virtual machine. The computer programs may be executed as tasks scheduled for execution at respective points in an apparent time tracked by an apparent-time reference. During execution of the computer programs, the computer system may detect a current point in apparent time at which all tasks scheduled for repeated execution at a given frequency have been executed, or at which the virtual processor is idle. And in response, the computer system may advance the apparent time to a subsequent point with a frequency greater than that with which the apparent time is tracked by the apparent-time reference. | 2015-08-06 |
20150220368 | DATA AND STATE THREADING FOR VIRTUALIZED PARTITION MANAGEMENT - The system includes a virtualized environment having at least one partition. An instance of an application executes in the partition. The application instance is not dedicated to a single user or element. Rather, the application instance may be shared or parsed out to two or more users or elements. To accomplish this sharing, the static data (which is common to all the elements or users) may be maintained in the partition or is loaded at runtime. The dynamic data (the data which is unique to each instantiation and associated with the element requesting the application) can be loaded when an instance is dedicated to execute for a particular element or user. Thus, various elements can share instances of an application and there need not be instances dedicated to particular elements. | 2015-08-06 |
20150220369 | DISTRIBUTED PROCEDURE EXECUTION IN MULTI-CORE PROCESSORS - Technologies are generally described for methods and systems effective to execute a program in a multi-core processor. In an example, methods to execute a program in a multi-core processor may include executing a first procedure on a first core of a multi-core processor. The methods may further include while executing the first procedure, sending a first and second instruction, from the first core to a second and third core, respectively. The instructions may command the cores to execute second and third procedures. The methods may further include executing the first procedure on the first core while executing the second procedure on the second core and executing the third procedure on the third core. | 2015-08-06 |
20150220370 | JOB SCHEDULING APPARATUS AND METHOD THEREFOR - A plurality of compute nodes are divided into a plurality of groups. A maximum available resource amount determining unit determines, for each of the plurality of groups, the available resource amount of the compute node having the greatest available resource amount among the compute nodes belonging to the group as the maximum available resource amount of the group. An excluding unit compares the resource consumption of a job with the maximum available resource amount of each of the plurality of groups, and excludes a group whose maximum available resource amount is less than the resource consumption from search objects. A searching unit searches for a compute node whose available resource amount is greater than or equal to the resource consumption, from the compute nodes belonging to a group that is not excluded from the search objects. | 2015-08-06 |
20150220371 | ENERGY AWARE INFORMATION PROCESSING FRAMEWORK FOR COMPUTATION AND COMMUNICATION DEVICES COUPLED TO A CLOUD - An energy aware framework for computation and communication devices (CCDs) is disclosed. CCDs may support applications, which may participate in energy aware optimization. Such applications may be designed to support execution modes, which may be associated with different computation and communication demands or requirements. An optimization block may collect computation requirement values (CRVs). | 2015-08-06 |
20150220372 | TECHNIQUES FOR CONTROLLING USE OF LOCKS - Various embodiments are generally directed to techniques for controlling the use of locks that regulate access to shared resources by concurrently executed portions of code. An apparatus to control locking of a resource includes a processor component, a history analyzer for execution by the processor component to analyze at least one result of a replacement of a lock instruction of a first instance of code with a lock marker to allow the processor component to speculatively execute a second instance of code, and a locking component for execution by the processor component to replace the lock instruction with the lock marker based on analysis of the at least one result, the first and second instances of code to access a resource and the lock instruction to request a lock of access to the resource to the first instance of code. Other embodiments are described and claimed. | 2015-08-06 |
20150220373 | Identifying and Modifying Hanging Escalation Tasks to Avoid Hang Conditions - In a method for processing work items that have not been completed by a first escalation, a computer determines that the first escalation failed to complete execution, processed fewer work items than the first escalation is configured to process, or completed execution beyond an allotted processing time. The computer duplicates the first escalation to form a second escalation. In addition, the computer configures the second escalation to process the work items that have not been completed by the first escalation. Furthermore, the computer disables the first escalation and activates the second escalation to process the work items that have not been completed by the first escalation. | 2015-08-06 |
20150220374 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM - An information processing apparatus comprises: a first platform having a first application programming interface; a wrapper absorbing a difference between the first application programming interface and a second application programming interface on a second platform having the second application programming interface; and an intermediate task capable of calling a system call of the first platform and a system call of the wrapper, wherein when the intermediate task makes communication with a first task that is a task generated on the first platform, the intermediate task uses the system call of the first platform, whereas when the intermediate task makes communication with a second task that is a task generated on the wrapper, the intermediate task uses the system call of the wrapper. | 2015-08-06 |
20150220375 | SYSTEMS AND METHODS OF INTERFACE DESCRIPTION LANGUAGE (IDL) COMPILERS - An IDL compiler generates a descriptor for invoking a method implemented by a software component or a target unit by source units, where the descriptor customizes the invocation by one or more source units based on, at least in part, whether a respective source unit and the target unit are mapped to the same core or to different cores, as specified by a unit-core map. Additionally or in the alternative, the invocation may depend on whether the method is synchronous, asynchronous, or unspecified. Using the unit-core map, a channel associated with a method may be monitored efficiently by avoiding monitoring of the source units that are mapped to the same core as the target unit is. | 2015-08-06 |
20150220376 | SYSTEM AND METHOD FOR INVESTIGATING ANOMALIES IN API PROCESSING SYSTEMS - A method is provided for detecting irregularities of one or more application programming interface (API) entities. The method includes receiving a request for data of one or more API entities. The method further includes monitoring said data from at least one server and detecting irregularities in the data. The method also includes displaying information pertaining to the irregularities to a user. | 2015-08-06 |
20150220377 | IDENTIFYING NETWORK PERFORMANCE ALERT CONDITIONS - A method includes receiving diagnostic data at a computing system from network interface devices. The method includes analyzing the diagnostic data with the computer system to identify a performance alert condition. The method includes determining, by the computer system, potential causes of the performance alert condition. The method includes determining, by the computer system, probabilities associated with the potential causes being actual causes of the performance alert condition. The method also includes generating, by the computer system, an output including a potential causes list ordered according to the probabilities associated with the potential causes being the actual causes of the performance alert condition. | 2015-08-06 |
20150220378 | SAFETY COMPUTING DEVICE, SAFETY INPUT DEVICE, SAFETY OUTPUT DEVICE, AND SAFETY CONTROLLER - A safety computing device includes a processor and a memory. The memory includes a first memory area and a second memory area having an address different from the first memory area. The processor includes an execution control unit performing a first process including the program process on input data written in the first memory area, and a second process including the program process on input data written in the second memory area and addition of redundancy code to output data written in the second memory area, a result collating unit collating output data to which redundancy code is added in the first and second processes, a computation diagnosis unit diagnosing presence or absence of failure in the processor and the memory, and an abnormality processing unit that, when an abnormality is detected by at least one of redundancy check, collation, and diagnosis, stops outputting output data. | 2015-08-06 |
20150220379 | DYNAMICALLY DETERMINING AN EXTERNAL SYSTEMS MANAGEMENT APPLICATION TO REPORT SYSTEM ERRORS - Systems, methods, and computer program products to perform an operation comprising, responsive to an occurrence of an error on a computing system, selecting, based on one or more policy attributes, a first systems management application from a plurality of systems management applications registered to manage the computing system, generating an event notification including an identifier for the first systems management application, and transmitting the event notification to the first systems management application for reporting to a remote service. | 2015-08-06 |
20150220380 | DYNAMICALLY DETERMINING AN EXTERNAL SYSTEMS MANAGEMENT APPLICATION TO REPORT SYSTEM ERRORS - Systems, methods, and computer program products to perform an operation comprising, responsive to an occurrence of an error on a computing system, selecting, based on one or more policy attributes, a first systems management application from a plurality of systems management applications registered to manage the computing system, generating an event notification including an identifier for the first systems management application, and transmitting the event notification to the first systems management application for reporting to a remote service. | 2015-08-06 |
20150220381 | Out-Of-Band Monitoring and Managing of Self-Service Terminals (SSTs) - A portable memory device is interfaced to an SST and authenticated; a system application on the SST writes diagnostic data to the device. The portable memory device is subsequently interfaced to an enterprise system and the diagnostic data is pulled to the enterprise system for analysis. In an embodiment, the enterprise system pushes informational data regarding maintenance and support to the portable device when the portable device is subsequently interfaced to the SST; the informational data is pushed to the SST for presentation and viewing by a service engineer. | 2015-08-06 |
20150220382 | SYSTEM AND METHOD FOR SUBSCRIBING FOR INTERNET PROTOCOL MULTIMEDIA SUBSYSTEMS (IMS) SERVICES REGISTRATION STATUS - A system and method that allows mobile device applications to receive changes in registration status from application services that are accessed via an Internet Protocol Multimedia Subsystem (IMS). Applications on a mobile device subscribe to receive notifications of changes in registration status for requested services. When a change to the registration status of a service occurs, a notification message is transmitted to the application on the mobile device. Notifications of changes in status are thereby received by each application on a per-application-service basis. In some embodiments, when a request to register with an application service fails, the corresponding notification message includes a reason for the failure. In some embodiments, notification messages are originated by a registration manager that operates in the IMS and transmitted to an IMS client operating on a mobile device. In some embodiments, notification messages are originated by each application service and transmitted directly to subscribed applications. | 2015-08-06 |
20150220383 | RECEPTION CIRCUIT OF IMAGE DATA, ELECTRONIC DEVICE USING THE SAME, AND METHOD OF TRANSMITTING IMAGE DATA - A reception circuit for receiving serial data including first pixel data constituting image data from a transmission circuit, wherein the serial data has a format allowing the reception circuit to detect a transmission error, the reception circuit including a serial-to-parallel converter configured to receive the serial data and convert the received serial data into first parallel data, an error detector configured to determine whether the first parallel data is correct or erroneous based on the first parallel data, a correcting buffer configured to maintain the first pixel data included in the first parallel data if the first parallel data is determined to be correct by the error detector, and a correcting unit configured to substitute the first pixel data included in the first parallel data determined to be erroneous by the error detector with a value corresponding to second pixel data stored in the correcting buffer. | 2015-08-06 |
20150220384 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, AND FAILURE DETECTION METHOD - An information processing apparatus includes a first transfer unit that receives data and transfers the data to a second transfer unit per a predetermined unit of transfer; and the second transfer unit that sends the data that is received from the first transfer unit, wherein the first transfer unit includes: a first calculator that calculates first error information on the whole received data on the basis of the received data; a second calculator that calculates, on the basis of the data that is sent by the second transfer unit, second error information on the whole sent data; and an error detector that compares the first error information and the second error information to detect an error. | 2015-08-06 |
20150220385 | NON-BLOCKING STORAGE SCHEME - Techniques are disclosed relating to writing data across multiple storage blocks in a storage device. In one embodiment, physical erase blocks in a bank of a storage device are erasable. Ones of the physical erase blocks may be associated with different respective communication channels. In such an embodiment, a data stripe may be written across a set of physical erase blocks such that the set of physical erase blocks includes physical erase blocks of different banks and includes physical erase blocks associated with different communication channels. In some embodiments, a request to read a portion of the data stripe may be received. In response to the request, a determination may be made that one of the set of physical erase blocks is unavailable to service the request. The request may then be serviced by reassembling data of the unavailable physical erase block. | 2015-08-06 |
20150220386 | DATA INTEGRITY IN MEMORY CONTROLLERS AND METHODS - The present disclosure includes methods, devices, and systems for data integrity in memory controllers. One memory controller embodiment includes a host interface and first error detection circuitry coupled to the host interface. The memory controller can include a memory interface and second error detection circuitry coupled to the memory interface. The first error detection circuitry can be configured to calculate error detection data for data received from the host interface and to check the integrity of data transmitted to the host interface. The second error detection circuitry can be configured to calculate error correction data for data and first error correction data transmitted to the memory interface and to check integrity of data and first error correction data received from the memory interface. | 2015-08-06 |
20150220387 | ERROR CORRECTION IN NON-VOLATILE MEMORY - Apparatus, systems, and methods for error correction in memory are described. In one embodiment, a memory controller comprises logic to receive a read request for data stored in a memory, retrieve the data and at least one associated error correction codeword, wherein the data and an associated error correction codeword are distributed across a plurality of memory devices in memory, apply a first error correction routine to decode the error correction codeword retrieved with the data, and, in response to an uncorrectable error in the error correction codeword, apply a second error correction routine to the plurality of devices in memory. Other embodiments are also disclosed and claimed. | 2015-08-06 |
20150220388 | Systems and Methods for Hard Error Reduction in a Solid State Memory Device - Systems and methods relating generally to solid state memory, and more particularly to systems and methods for reducing errors in a solid state memory. | 2015-08-06 |
20150220389 | MEMORY CONTROLLERS AND FLASH MEMORY READING METHODS - A method of reading multi-bit data stored in a memory cell of a flash memory includes attempting to perform hard decision (HD) decoding on output data from the flash memory, and performing soft decision (SD) decoding on the output data when the HD decoding cannot be performed. The performing of the SD decoding includes: changing a maximum number of iterations according to a threshold voltage distribution of the memory cell; and performing the SD decoding based on the changed maximum number of iterations. | 2015-08-06 |
20150220390 | PROGRAMMING METHOD, READING METHOD AND OPERATING SYSTEM FOR MEMORY - A programming method, a reading method and an operating system for a memory are provided. The programming method includes the following steps. A data is provided. A parity generation is performed to obtain an error-correcting code (ECC). The memory is programmed to record the data and the error-correcting code. The data is transformed before the parity generation is performed, such that the Hamming distance between the two codes corresponding to any two adjacent threshold voltage states in the data on which the parity generation is performed is 1. | 2015-08-06 |
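The transform described in 20150220390 amounts to Gray-coding the state indices before parity generation, so that any two adjacent threshold-voltage states differ in exactly one bit. A minimal Python sketch (the function names are my own, not from the filing):

```python
def to_gray(n: int) -> int:
    """Map a binary state index to its Gray code, so adjacent
    threshold-voltage states end up with Hamming distance 1."""
    return n ^ (n >> 1)

def hamming_distance(a: int, b: int) -> int:
    """Count the bit positions in which a and b differ."""
    return bin(a ^ b).count("1")

# e.g. eight threshold-voltage states (a TLC cell); after the transform,
# every pair of adjacent states is exactly one bit apart.
states = list(range(8))
gray = [to_gray(s) for s in states]
assert all(hamming_distance(gray[i], gray[i + 1]) == 1 for i in range(7))
```

The transformed codes would then be fed to the parity generation step; on read, the inverse mapping recovers the original data.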
20150220391 | IDENTIFICATION AND MITIGATION OF HARD ERRORS IN MEMORY SYSTEMS - Embodiments provide a method comprising estimating a first set of log-likelihood ratio (LLR) values for a plurality of memory cells of a memory; based on the first set of LLR values, performing a first error correcting code (ECC) decoding operation; in response to determining a failure of the first ECC decoding operation, generating, by adjusting the first set of LLR values, a second set of LLR values for the plurality of memory cells; and based on the second set of LLR values, performing a second ECC decoding operation. | 2015-08-06 |
20150220392 | STORAGE SYSTEMS WITH ADAPTIVE ERASURE CODE GENERATION - Apparatuses, methods, and storage media associated with generating erasure codes for data to be stored in a storage system are disclosed. In embodiments, a method may include launching, by the storage system, a plurality of instances of an erasure code generation module, based at least in part on a hardware configuration of the storage system. Additionally, the method may further include setting, by the storage system, operational parameters of the plurality of instances of the erasure code generation module, based at least in part on a current system load of the storage system. Further, the method may include operating, by the storage system, the plurality of instances of the erasure code generation module to generate erasure codes for data to be stored in the storage system, in accordance with the operational parameters set. Other embodiments may be described and claimed. | 2015-08-06 |
20150220393 | METHOD AND APPARATUS FOR STORING TRACE DATA - A method and apparatus for storing trace data within a processing system. The method comprises configuring at least one Error Correction Code, ECC, component within the processing system to operate in a trace data storage operating mode, generating trace data at a debug module of the processing system, and conveying the trace data from the debug module to the at least one ECC component for storing in an area of memory used for ECC information. | 2015-08-06 |
20150220394 | MEMORY SYSTEM AND METHOD OF CONTROLLING MEMORY SYSTEM - According to an embodiment, a controller performs a process corresponding to processing of a second command when the second command is received from a host device before a first time passes after an end of processing corresponding to a first command received from the host device, and executes patrol read to read data stored in a nonvolatile memory continuously in predetermined units when no command is received from the host device before the first time passes. | 2015-08-06 |
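The idle-triggered patrol read of 20150220394 can be modeled with simulated timestamps: the controller restarts an idle timer whenever a host command finishes, and once the idle window elapses it reads the nonvolatile memory sequentially in fixed units. Class and field names here are invented for illustration:

```python
class PatrolController:
    """Toy model: if no host command arrives within `idle_limit` time
    units after the last command finished, read the next `unit`-sized
    chunk of the (simulated) nonvolatile memory as a patrol read."""
    def __init__(self, memory, idle_limit, unit=2):
        self.memory = memory
        self.idle_limit = idle_limit
        self.unit = unit
        self.last_cmd_end = 0
        self.patrol_pos = 0
        self.patrolled = []

    def on_command(self, now):
        # Host command processed: the idle timer restarts.
        self.last_cmd_end = now

    def tick(self, now):
        # Idle long enough? Patrol-read the next chunk, wrapping around.
        if now - self.last_cmd_end >= self.idle_limit:
            chunk = self.memory[self.patrol_pos:self.patrol_pos + self.unit]
            self.patrolled.extend(chunk)
            self.patrol_pos = (self.patrol_pos + self.unit) % len(self.memory)

ctrl = PatrolController(memory=list("abcdef"), idle_limit=3)
ctrl.on_command(now=0)
ctrl.tick(now=2)   # still within the idle window: no patrol read
ctrl.tick(now=3)   # idle limit reached: patrol-reads "a" and "b"
```

A real controller would also verify (and possibly refresh) the data it patrol-reads; that step is omitted here.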
20150220395 | ERROR CONTROL IN MEMORY STORAGE SYSTEMS - A method includes calculating a first syndrome of a codeword read from a memory location under a first set of conditions and calculating a second syndrome of the codeword read from the memory location under a second set of conditions. The method also includes analyzing the first and second syndromes and applying one of the first and second syndromes to the codeword to find the codeword having a minimum number of errors. | 2015-08-06 |
20150220396 | WRITING ENCODED DATA SLICES IN A DISPERSED STORAGE NETWORK - A method for writing a set of encoded data slices to memory of a dispersed storage network (DSN) begins by a processing module identifying an encoded data slice of the set of encoded data slices for a redundant write operation to produce an identified encoded data slice. The method continues with the processing module generating a set of first write requests regarding the set of encoded data slices less the identified encoded data slice and generating a set of second write requests regarding the identified encoded data slice. The method continues with the processing module sending the set of first write requests to storage units of the DSN and sending the set of second write requests to a set of storage units of the DSN, where each storage unit of the set of storage units is sent a corresponding one of the set of second write requests. | 2015-08-06 |
20150220397 | STORAGE CONTROLLER, STORAGE SYSTEM AND STORAGE CONTROL METHOD - A storage controller, when writing n sets of data into a first storage device, adds dummy data to other sets of data except for a first set of data having a largest size among the n sets of data such that sizes of other sets of data become equal to the size of the first set of data, calculates (n−1) parities based on the first set of data and the other sets of data, and when reading the n sets of data from the first storage device, concurrently performs a processing of reading a second set of data having a smallest size among the n sets of data from the first storage device and a processing of restoring each of two or more sets of data in the n sets of data except for the second set of data, by using the (n−1) parities and the dummy data. | 2015-08-06 |
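The dummy-padding idea in 20150220397 — equalize the n sets of data with dummy bytes, compute parities, then restore a set from the parities plus the dummy data — can be sketched with a single XOR parity. This is a simplification: the patent computes (n−1) parities so that multiple sets can be restored concurrently, while one XOR parity can rebuild only one missing set:

```python
def build_parity(data_sets):
    """Pad every set with dummy (zero) bytes up to the size of the
    largest set, then XOR the padded sets column-wise into one parity."""
    size = max(len(d) for d in data_sets)
    padded = [d + bytes(size - len(d)) for d in data_sets]
    parity = bytearray(size)
    for d in padded:
        for i, byte in enumerate(d):
            parity[i] ^= byte
    return padded, bytes(parity)

def restore(missing_index, padded, parity, original_length):
    """Rebuild one lost set by XOR-ing the parity with the surviving
    padded sets, then trimming the dummy padding back off."""
    out = bytearray(parity)
    for j, d in enumerate(padded):
        if j != missing_index:
            for i, byte in enumerate(d):
                out[i] ^= byte
    return bytes(out[:original_length])

sets = [b"hello", b"hi", b"hey"]
padded, parity = build_parity(sets)
recovered = restore(1, padded, parity, original_length=len(sets[1]))
```

This also shows why the patent reads the smallest set directly while restoring the others: the smallest set carries the most dummy bytes, which are known zeros and need not be stored or read.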
20150220398 | Prioritizing Data Reconstruction in Distributed Storage Systems - A method of prioritizing data for recovery in a distributed storage system includes, for each stripe of a file having chunks, determining whether the stripe comprises high-availability chunks or low-availability chunks and determining an effective redundancy value for each stripe. The effective redundancy value is based on the chunks and any system domains associated with the corresponding stripe. The distributed storage system has a system hierarchy including system domains. Chunks of a stripe associated with a system domain in an active state are accessible, whereas chunks of a stripe associated with a system domain in an inactive state are inaccessible. The method also includes reconstructing substantially immediately inaccessible, high-availability chunks having an effective redundancy value less than a threshold effective redundancy value and reconstructing the inaccessible low-availability and other inaccessible high-availability chunks, after a threshold period of time. | 2015-08-06 |
20150220399 | METHOD AND SYSTEM FOR FACILITATING ONE-TO-MANY DATA TRANSMISSIONS WITH REDUCED NETWORK OVERHEAD - A method and system for facilitating one-to-many data transmissions with reduced network overhead includes conducting a round of data transmissions from a source computing device to a plurality of sink computing devices. Each of the sink computing devices generates a bucket list of lost data blocks for the round of data transmissions and transmits the bucket list to the source computing device. The source computing device conducts a subsequent round of data transmissions based on the bucket lists. One or more additional subsequent rounds may be conducted until the bucket list of each sink computing device is empty. | 2015-08-06 |
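The round structure of 20150220399 can be simulated end to end: each sink keeps a bucket list of blocks it has not yet received, and the source retransmits only the union of the bucket lists until every list is empty. The loss schedule below is a deterministic stand-in for real network loss:

```python
def run_rounds(blocks, loss_schedule):
    """blocks: block ids to deliver to every sink.
    loss_schedule: per sink, a set of (round, block) pairs that are
    lost in transit. Returns the number of rounds until all bucket
    lists are empty."""
    pending = {sink: set(blocks) for sink in loss_schedule}  # bucket lists
    to_send = set(blocks)
    rnd = 0
    while any(pending.values()):
        rnd += 1
        for sink, lost in loss_schedule.items():
            received = {b for b in to_send if (rnd, b) not in lost}
            pending[sink] -= received
        # Next round: retransmit only the union of the bucket lists.
        to_send = set().union(*pending.values())
    return rnd

rounds = run_rounds(
    ["b1", "b2", "b3"],
    {"sink_a": {(1, "b2")},                 # loses b2 in round 1
     "sink_b": {(1, "b3"), (2, "b3")}})     # loses b3 in rounds 1 and 2
```

The overhead reduction comes from the shrinking `to_send` set: later rounds carry only blocks some sink still lacks, instead of repeating the whole transmission.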
20150220400 | RECOVERING DATA FROM MICROSLICES IN A DISPERSED STORAGE NETWORK - A method begins by a processing module of a dispersed storage network (DSN) identifying a data segment to be retrieved from storage units of the DSN, where the data segment is encoded into a set of encoded data slices that is divided into block sets of encoded data slices, and where each storage unit stores a block set of encoded data slices. The method continues with the processing module generating a set of read requests in accordance with retrieval information which assures that at least a decode threshold number of encoded data slices of the set are retrievable, where each request includes identity of a block set and a number of encoded data slices that are to be read from a storage unit. The method continues with the processing module sending the set of read requests to the storage units and decoding received encoded data slices to recover the data segment. | 2015-08-06 |
20150220401 | NEW APPROACH FOR CONTROLLER AREA NETWORK BUS OFF HANDLING - A system and method for determining when to reset a controller in response to a bus off state. The method includes determining that the controller has entered a first bus off state and immediately resetting the controller. The method further includes setting a reset timer in response to the controller being reset, determining whether the controller has entered a subsequent bus off state, and determining a reset time from the reset timer. The method immediately resets the controller in response to the subsequent bus off state if the reset time is greater than the first predetermined time interval, and resets the controller in response to the subsequent bus off state after a second predetermined time interval has elapsed if the reset time is less than the first predetermined time interval. | 2015-08-06 |
20150220402 | INCREMENTAL BLOCK LEVEL BACKUP - Disclosed are systems, computer-readable mediums, and methods for incremental block level backup. An initial backup of a volume is created at a backup server, where creating the initial backup includes retrieving an original metadata file from a metadata server, and retrieving a copy of all data of the volume based on the original metadata file. A first incremental backup of the volume is then created at the backup server, where creating the first incremental backup includes retrieving a first metadata file, where the first metadata file was created separately from the original metadata file. A block identifier of the first metadata file is compared to a corresponding block identifier of the original metadata file to determine a difference between the first and original block identifiers, and a copy of a changed data block of the volume is retrieved based on the comparison of the first and original block identifiers. | 2015-08-06 |
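The core of 20150220402's incremental step is a per-block comparison of identifiers between the original and the new metadata file; only blocks whose identifiers differ (or are new) are copied. A sketch, representing each metadata file as a mapping from block address to a content identifier (the hash-like identifiers are illustrative):

```python
def diff_changed_blocks(original_metadata, new_metadata):
    """Return the sorted block addresses whose identifiers differ
    between the two metadata files or appear only in the new one --
    exactly the blocks an incremental backup must retrieve."""
    return sorted(addr for addr, block_id in new_metadata.items()
                  if original_metadata.get(addr) != block_id)

changed = diff_changed_blocks(
    {0: "h_aaa", 1: "h_bbb", 2: "h_ccc"},            # original metadata
    {0: "h_aaa", 1: "h_xxx", 2: "h_ccc", 3: "h_ddd"})  # first metadata file
```

Only blocks 1 (changed identifier) and 3 (new block) would be fetched from the volume for the first incremental backup; blocks 0 and 2 are already represented in the initial backup.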
20150220403 | BACKUP OF BASELINE INSTALLATION - A method of backing up a computing device comprises storing in the computing device, prior to any first backup of the computing device, a selected pre-populated Reference File that comprises one or more references to at least some of the data blocks stored in the computing device. A first backup may then be initiated. The first backup may cause references to data blocks in the computing device that are unrepresented in the pre-populated Reference File to be added to the Reference File. The data blocks corresponding to the added references may then be sent to a backup server over a computer network. | 2015-08-06 |
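The mechanism in 20150220403 is effectively deduplication seeded before the first backup: blocks already referenced in the pre-populated Reference File (e.g. a stock OS image) are never sent. A sketch using SHA-256 fingerprints as the block references (the fingerprint choice is my assumption, not the patent's):

```python
import hashlib

def fingerprint(block: bytes) -> str:
    """Content identifier used as a Reference File entry."""
    return hashlib.sha256(block).hexdigest()

def first_backup(device_blocks, reference_ids):
    """Send only blocks whose fingerprints are absent from the
    pre-populated Reference File; add each new fingerprint to the
    Reference File so later duplicates are skipped too."""
    to_send = []
    for block in device_blocks:
        block_id = fingerprint(block)
        if block_id not in reference_ids:
            reference_ids.add(block_id)
            to_send.append(block)
    return to_send

# Reference File pre-populated with the baseline installation's block:
reference = {fingerprint(b"os-base-image")}
sent = first_backup([b"os-base-image", b"user-data", b"user-data"], reference)
```

Only the user-created block crosses the network; the baseline block is assumed to already exist on the backup server.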
20150220404 | UNDO CONFIGURATION TRANSACTIONAL COMPENSATION - A method for system management, comprising initiating a workflow operating on a processor. Initiating a sub-workflow operating on the processor from the workflow. Electronically reading state data for one or more resources designated by the sub-workflow prior to performing a first logical process of the sub-workflow. Storing the state data in a non-transient data memory. Performing logical processes associated with the sub-workflow using the processor. Restoring the state data for the one or more resources if it is determined that an error has occurred. | 2015-08-06 |
20150220405 | In-memory continuous data protection - An in-memory application has a state that is associated with data (C | 2015-08-06 |
20150220406 | VIRTUAL MACHINE-GUEST DRIVEN STATE RESTORING BY HYPERVISOR - An example method of saving and restoring a state of one or more registers for a guest includes detecting exit of a virtual machine mode of a guest running on a virtual machine. A set of registers is accessible by the guest and includes a first subset of registers and a second subset of registers. The method also includes identifying the first subset of registers. The first subset of registers includes one or more registers to be overwritten by the guest upon re-entry of the virtual machine mode. The second subset of registers is mutually exclusive from the first subset of registers. The method further includes after detecting exit of the virtual machine mode of the guest, detecting re-entry of the virtual machine mode of the guest. The method also includes restoring a saved state of the second subset of registers for the guest. | 2015-08-06 |
20150220407 | METHODS AND APPARATUS FOR PROVIDING HYPERVISOR LEVEL DATA SERVICES FOR SERVER VIRTUALIZATION - A cross-host multi-hypervisor system, including a plurality of host sites, each site including at least one hypervisor, each of which includes at least one virtual server, at least one virtual disk read from and written to by the at least one virtual server, a tapping driver in communication with the at least one virtual server, which intercepts write requests made by any one of the at least one virtual server to any one of the at least one virtual disk, and a virtual data services appliance, in communication with the tapping driver, which receives the intercepted write requests from the tapping driver, and which provides data services based thereon, and a data services manager for coordinating the virtual data services appliances at the site, and a network for communicatively coupling the plurality of sites, wherein the data services managers coordinate data transfer across the plurality of sites via the network. | 2015-08-06 |
20150220408 | AUTOMATED FAILURE RECOVERY OF SUBSYSTEMS IN A MANAGEMENT SYSTEM - Systems and methods for automated failure recovery of subsystems of a management system are described. The subsystems are built and modeled as services, and their management, specifically their failure recovery, is done in a manner similar to that of services and resources managed by the management system. The management system consists of a microkernel, service managers, and management services. Each service, whether a managed service or a management service, is managed by a service manager. The service manager itself is a service and so is in turn managed by the microkernel. Both managed services and management services are monitored via in-band and out-of-band mechanisms, and the performance metrics and alerts are transported through an event system to the appropriate service manager. If a service fails, the service manager takes policy-based remedial steps including, for example, restarting the failed service. | 2015-08-06 |
20150220409 | Per-Function Downstream Port Containment - Per-Function Downstream Port Containment (pF-DPC) is an extension to Downstream Port Containment (DPC) in the Peripheral Component Interconnect express (PCIe) standard. pF-DPC confines non-fatal errors to specific functions of an end-point device without disabling the link between a PCIe port and the end-point device. PCIe ports configured for pF-DPC may filter (e.g., drop) packets carrying routing identifiers (RIDs) and/or addresses assigned to a function affected by a non-fatal error, while continuing to forward packets carrying RIDs/addresses associated with remaining operable functions over the corresponding link. | 2015-08-06 |
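The per-function filtering that 20150220409 describes for a pF-DPC port can be sketched as a predicate over packets: traffic whose RID or target address belongs to the contained function is dropped, while traffic for the device's other functions keeps flowing on the same link. Packet fields and values here are illustrative, not from the PCIe specification:

```python
def make_pf_dpc_filter(contained_rids, contained_addrs=frozenset()):
    """Build a forwarding predicate for a toy pF-DPC port: returns
    False (drop) for packets addressed to a contained function,
    True (forward) for everything else."""
    def forward(packet):
        return (packet.get("rid") not in contained_rids
                and packet.get("addr") not in contained_addrs)
    return forward

# Function 02:01.2 hit a non-fatal error; contain only its traffic.
port = make_pf_dpc_filter(contained_rids={0x0212},
                          contained_addrs={0xFEDC_0000})
keep = port({"rid": 0x0210, "addr": 0x1000_0000})  # sibling function: forward
drop = port({"rid": 0x0212})                       # contained function: drop
```

In contrast, classic DPC would bring the whole link down, taking the still-healthy sibling functions with it.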
20150220410 | MECHANISM FOR ACHIEVING HIGH MEMORY RELIABILITY, AVAILABILITY AND SERVICEABILITY - A mechanism is described for achieving high memory reliability, availability, and serviceability (RAS) according to one embodiment of the invention. A method of embodiments of the invention includes detecting a permanent failure of a first memory device of a plurality of memory devices of a first channel of a memory system at a computing system, and eliminating the first failure by merging a first error-correction code (ECC) locator device of the first channel with a second ECC locator device of a second channel, wherein merging is performed at the second channel. | 2015-08-06 |