3rd week of 2016 patent application highlights part 42 |
Patent application number | Title | Published |
20160019069 | Cloud Firmware - A network element (NE) comprising a receiver configured to couple to a cloud network; and a multi-core central processing unit (CPU) coupled to the receiver and configured to receive a first partition configuration from an orchestration element, partition a plurality of processor cores into a plurality of processor core partitions according to the first partition configuration, and initiate a plurality of virtual basic input/output systems (vBIOSs) such that each vBIOS manages a processor core partition. | 2016-01-21 |
20160019070 | METHOD FOR CONFIGURING STORAGE SYSTEM CONNECTION, DEVICE AND STORAGE SYSTEM - A method for configuring a connection in a storage system is provided. A configuring device determines that the configuring device cannot communicate with a first control board, and identifies route information related to the first control board in a route information table. The route information is route information between an adapter card and the first control board. The configuring device modifies the identified route information by changing an address of the first control board in the route information to an address of a second control board. | 2016-01-21 |
20160019071 | ACCURATE STATIC DEPENDENCY ANALYSIS VIA EXECUTION-CONTEXT TYPE PREDICTION - Exemplary embodiments provide methods, mediums, and systems for generating a runtime environment that is customized to a particular computer program, particularly in terms of the function definitions that support function calls made in the computer program. The customized runtime environment may therefore be smaller in size than a conventional runtime environment. To create such a customized runtime environment, an analyzer may be provided which monitors test executions of the computer program and/or performs a structural analysis of the source code of the computer program. The analyzer may determine a list of probabilistically or deterministically required function definitions, and provide the list to a component reducer. The component reducer may eliminate any function definitions not deemed to be required from a runtime environment, thereby producing a customized runtime environment that is built to support a particular computer program. | 2016-01-21 |
20160019072 | DYNAMIC DETERMINATION OF APPLICATION SERVER RUNTIME CLASSLOADING - Embodiments of the present invention provide a method, system and computer program product for dynamic selection of a runtime classloader for a generated class file. In an embodiment of the invention, a method for dynamic selection of a runtime classloader for a generated class file is provided. The method includes extracting meta-data from a program object directed for execution in an application server and determining from the meta-data a container identity for a container in which the program object had been compiled. The method also includes selecting a container according to the meta-data. Finally, the method includes classloading the program object in the selected container. | 2016-01-21 |
20160019073 | Dynamically Loaded Plugin Architecture - A method and architecture for using dynamically loaded plugins is described herein. The dynamically loaded plugin architecture comprises a parent context and a plugin repository. The parent context may define one or more reusable software components. The plugin repository may store one or more plugins. When a plugin is loaded, a child context may be created dynamically. The child context is associated with the plugin and inherits the one or more reusable software components from the parent context. | 2016-01-21 |
20160019074 | DISTRIBUTED CLOUD COMPUTING ELASTICITY - A method comprising, in a cloud computing system: receiving a new job at the cloud computing system; sampling VMs (Virtual Machines) of the cloud computing system for the load currently handled by each of the VMs; if the load currently handled by the VMs is within operational bounds, sending the new job to one of the VMs which currently handles the highest load compared to other ones of the VMs; and if the load currently handled by the VMs is beyond operational bounds, sending the new job to one of the VMs which currently handles the lowest load compared to other ones of the VMs. | 2016-01-21 |
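The routing rule in 20160019074 reduces to a single branch on whether every VM's load is within an operational bound. A minimal Python sketch of that rule (the load list and the `max_load` bound are illustrative assumptions, not details from the application):

```python
def route_job(vm_loads, max_load):
    """Pick the index of the VM that should receive a new job.

    While every VM is within the operational bound, pack the job onto
    the busiest VM (consolidation); once any VM exceeds the bound,
    spill the job to the least-loaded VM instead.
    """
    if all(load <= max_load for load in vm_loads):
        # Within operational bounds: send to the most loaded VM.
        return max(range(len(vm_loads)), key=lambda i: vm_loads[i])
    # Beyond operational bounds: send to the least loaded VM.
    return min(range(len(vm_loads)), key=lambda i: vm_loads[i])
```

Packing jobs onto the busiest in-bounds VM keeps lightly loaded VMs free to be retired, which is the elasticity the title refers to.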
20160019075 | VIRTUAL MACHINE SUSPENSION IN CHECKPOINT SYSTEM - Performing a checkpoint includes determining a checkpoint boundary of the checkpoint for a virtual machine, wherein the virtual machine has a first virtual processor, determining a scheduled hypervisor interrupt for the first virtual processor, and adjusting, by operation of one or more computer processors, the scheduled hypervisor interrupt to before or substantially at the checkpoint boundary. | 2016-01-21 |
20160019076 | PROVENANCE IN CLOUD COMPUTING SYSTEMS - A method comprises pairing a virtual machine instance with a virtual agent that is registered with a registry in an execution environment. In this regard, upon instantiating the virtual machine and the corresponding virtual agent, the virtual agent monitors for transaction(s), e.g., a specific invoked method, on that execution environment. The virtual agent is also configured to generate an event in response to detecting the transaction. The virtual agent provides a unique signature associated with the event, which identifies the origin of the virtual machine instance. Still further, the virtual agent is configured to forward the event to the registry for collating with other events so as to produce composite end-to-end logs of processes in a manner that enables provenance. | 2016-01-21 |
20160019077 | IMPORTING A RUNNING VM - A virtualization manager executing on a processing device adds a host to a list of hosts associated with the virtualization manager. The virtualization manager identifies a list of external VMs running on the host that are not managed by the virtualization manager. The virtualization manager obtains detailed information for each of the external VMs running on the host from an agent running on the host. The virtualization manager then manages the external VMs running on the host using the detailed information. | 2016-01-21 |
20160019078 | IMPLEMENTING DYNAMIC ADJUSTMENT OF I/O BANDWIDTH FOR VIRTUAL MACHINES USING A SINGLE ROOT I/O VIRTUALIZATION (SRIOV) ADAPTER - A method, system and computer program product are provided for implementing dynamic adjustment of Input/Output bandwidth for Virtual Machines of a Single Root Input/Output Virtualization (SRIOV) adapter. The SRIOV adapter includes a plurality of virtual functions (VFs). Each individual virtual function (VF) is enabled to be explicitly assigned to a Virtual Machine (VM); and each of a plurality of VF teams is created with one or more VFs and is assigned to a VM. Each VF team is enabled to be dynamically resizable for dynamic adjustment of Input/Output bandwidth. | 2016-01-21 |
20160019079 | SYSTEM AND METHOD FOR INPUT/OUTPUT ACCELERATION DEVICE HAVING STORAGE VIRTUAL APPLIANCE (SVA) USING ROOT OF PCI-E ENDPOINT - Methods and systems for I/O acceleration using an I/O accelerator device on a virtualized information handling system include pre-boot configuration of first and second device endpoints that appear as independent devices. After loading a storage virtual appliance that has exclusive access to the second device endpoint, a hypervisor may detect and load drivers for the first device endpoint. The storage virtual appliance may then initiate data transfer I/O operations using the I/O accelerator device. The data transfer operations may be read or write operations to a storage device that the storage virtual appliance provides access to. The I/O accelerator device may use direct memory access (DMA). | 2016-01-21 |
20160019080 | ALLOCATING STORAGE FOR VIRTUAL MACHINE INSTANCES BASED ON INPUT/OUTPUT (I/O) USAGE RATE OF THE DISK EXTENTS STORED IN AN I/O PROFILE OF A PREVIOUS INCARNATION OF THE VIRTUAL MACHINE - A method, system and computer program product for allocating storage for virtual machine instances. The input/output (I/O) usage of disk extents utilized by a virtual machine is saved in an I/O profile of the virtual machine. In response to deallocating the virtual machine, the I/O usage of the disk extents is extracted from its I/O profile and saved in a data structure. Upon starting a new instance of the virtual machine, new disk extents are allocated to the new virtual machine instance. The I/O usage of the disk extents for the previous incarnation of the virtual machine is applied to the disk extents allocated to the new virtual machine instance. The newly allocated disk extents can now be placed in either a solid-state drive device or a hard disk drive device based on this I/O history without requiring a twenty-four-hour-long cycle. | 2016-01-21 |
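The placement step in 20160019080 (hot extents to SSD, cold extents to HDD, driven by the prior incarnation's I/O profile) could be sketched as follows; the profile format and the IOPS threshold are hypothetical:

```python
def place_extents(io_profile, ssd_threshold):
    """Assign each newly allocated disk extent to an SSD or HDD tier
    based on the I/O usage recorded for the matching extent of the
    virtual machine's previous incarnation.

    io_profile: extent name -> observed I/O operations per second.
    ssd_threshold: minimum IOPS that justifies SSD placement.
    """
    return {extent: ("ssd" if iops >= ssd_threshold else "hdd")
            for extent, iops in io_profile.items()}
```

Because the profile is carried over from the deallocated VM, placement can happen at allocation time rather than after a long observation cycle, which is the point the abstract makes.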
20160019081 | VIEWING A SNAPSHOT OF A VIRTUAL MACHINE - In a computer-implemented method for viewing a snapshot of a virtual machine, during operation of a virtual machine in a first console, at least one snapshot of the virtual machine is presented for selection, wherein the snapshot includes a previous state of the virtual machine. Responsive to a selection of the snapshot, a second virtual machine of the selected snapshot is deployed in a second console, wherein the second virtual machine is deployed without closing the virtual machine in the first console. | 2016-01-21 |
20160019082 | COMPARING STATES OF A VIRTUAL MACHINE - In a computer-implemented method for comparing states of a virtual machine, a plurality of selectable states including a current state of a virtual machine and at least one snapshot of the virtual machine are presented for selection, wherein the at least one snapshot includes a state of the virtual machine at a previous state. Responsive to a selection of at least two states of the plurality of selectable states, a comparison tool for comparing information between the at least two states of the virtual machine is presented. | 2016-01-21 |
20160019083 | MODIFYING A STATE OF A VIRTUAL MACHINE - In a computer-implemented method for modifying a state of a virtual machine, information between two states of a virtual machine is compared, wherein the two states include a current state of the virtual machine and a previous state of the virtual machine. The previous state of the virtual machine is included within a snapshot of the virtual machine at the previous state. Information that is different between the two states is identified. The information that is different between the two states is presented, wherein the information that is different is selectable for copying between the two states. | 2016-01-21 |
20160019084 | METHOD AND SYSTEM FOR INTER-CLOUD VIRTUAL MACHINES ASSIGNMENT - A method is disclosed for providing for a high-level local manager in each data center of a group of data centers. The high-level local manager is configured to allocate a new virtual machine or re-allocate an already running virtual machine. The high-level local managers exchange information with each other and run the same programs or processes, so that each local manager knows where the new virtual machine is to be assigned. Once determined which data center will execute the virtual machine, the method provides for a low-level local manager to assign the virtual machine to one of the servers of the data center according to a local algorithm. | 2016-01-21 |
20160019085 | PROVISIONING OF COMPUTER SYSTEMS USING VIRTUAL MACHINES - A provisioning server automatically configures a virtual machine (VM) according to user specifications and then deploys the VM on a physical host. The user may either choose from a list of pre-configured, ready-to-deploy VMs, or he may select which hardware, operating system and application(s) he would like the VM to have. The provisioning server then configures the VM accordingly, if the desired configuration is available, or it applies heuristics to configure a VM that best matches the user's request if it isn't. The invention also includes mechanisms for monitoring the status of VMs and hosts, for migrating VMs between hosts, and for creating a network of VMs. | 2016-01-21 |
20160019086 | APPARATUS AND METHOD FOR GENERATING SOFTWARE DEFINED NETWORK (SDN)-BASED VIRTUAL NETWORK ACCORDING TO USER DEMAND - An apparatus and method for generating a Software Defined Network (SDN)-based virtual network. The apparatus includes a network information generator and a virtual network generator, in which an SDN-based virtual network desired by a user may be generated efficiently by allocating physical resources to reflect various user demands. | 2016-01-21 |
20160019087 | METHODS AND SYSTEMS FOR PROVIDING A CUSTOMIZED NETWORK - A method, system, and computer-readable medium for providing a secure computer network for the real time transfer of data are provided. The data is grouped and stored as per user preferences. The data being transmitted is encrypted, decrypted, and validated by the system (assuming user identifications/passwords are verified). | 2016-01-21 |
20160019088 | MOBILITY OPERATION RESOURCE ALLOCATION - According to one aspect of the present disclosure, a method and technique for mobility operation resource allocation is disclosed. The method includes: receiving a request to migrate a running application from a first machine to a second machine; displaying an adjustable resource allocation mobility setting interface indicating a plurality of mobility settings comprising at least one performance-based mobility setting and at least one concurrency-based mobility setting; receiving, via the interface, a selection of a mobility setting defining a resource allocation to utilize for the migration; and migrating the running application from the first machine to the second machine utilizing resources as set by the selected mobility setting. | 2016-01-21 |
20160019089 | METHOD AND SYSTEM FOR SCHEDULING COMPUTING - Provided is a method and system for scheduling computing so as to meet the quality of service (QoS) expected in a system by identifying the operation characteristic of an application in real time and enabling all nodes in the system to dynamically change the schedulers thereof organically between each other. The scheduling method includes: detecting an event of requesting a scheduler change; selecting a scheduler corresponding to the event among schedulers; and changing a scheduler of a node, which schedules use of the control unit, to the selected scheduler, without rebooting the node. | 2016-01-21 |
20160019090 | DATA PROCESSING CONTROL METHOD, COMPUTER-READABLE RECORDING MEDIUM, AND DATA PROCESSING CONTROL DEVICE - A data processing control device performs a MapReduce process. When the data processing control device assigns input data to first Reduce tasks and a second Reduce task performed by using a result of Map processes, the data processing control device assigns input data with smaller amount than any of amounts of the input data which is assigned to the first Reduce tasks to the second Reduce task. The data processing control device assigns the first Reduce tasks and the second Reduce task, to which input data is assigned, to a server that performs Reduce processes in the MapReduce process such that the second Reduce task is started after the assignment of all of the first Reduce tasks. | 2016-01-21 |
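The assignment rule in 20160019090 has two parts: the second Reduce task receives less input than any first Reduce task, and it is started only after all first Reduce tasks are assigned. A toy sketch of that rule (the partition map is a hypothetical representation of map output):

```python
def assign_reduce_tasks(partitions):
    """Split map-output partitions (name -> size) between 'first'
    Reduce tasks and one 'second' Reduce task, giving the second task
    less input than any first task and scheduling it last."""
    # Order partitions by size; the smallest goes to the second task.
    ordered = sorted(partitions.items(), key=lambda kv: kv[1])
    second_task = [ordered[0][0]]
    first_tasks = [name for name, _ in ordered[1:]]
    # Emit a schedule in which the second task starts after all
    # first tasks have been assigned.
    schedule = first_tasks + second_task
    return first_tasks, second_task, schedule
```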
20160019091 | AUTOMATION OF WORKFLOW CREATION AND FAILURE RECOVERY - A system includes a processor and a non-transitory computer-readable medium. The non-transitory computer-readable medium comprises instructions executable by the processor to cause the system to perform a method. The method comprises receiving a first job to execute and executing the first job. A plurality of data associated with the first job is determined. The plurality of data comprises data associated with (i) a second job executed immediately prior to the first job, (ii) a third job executed immediately after the first job, (iii) a determination of whether the first job failed or executed successfully and (iv) a type of data associated with the first job. The determined plurality of data is stored. | 2016-01-21 |
20160019092 | METHOD AND APPARATUS FOR CLOSING PROGRAM, AND STORAGE MEDIUM - A method and an apparatus for closing a program, and a storage medium are provided. The method includes: opening, by a mobile terminal, a task management area of a multitasking processing queue, where a response area is provided in the task management area; detecting, by the mobile terminal, in the response area; and closing, by the mobile terminal, all programs in the multitasking processing queue in response to a specified operation of a user detected in the response area. An operation of closing a background program is simplified, and a user can easily close a background program just by performing a specified operation in a response area; therefore, the operation is simple and convenient. | 2016-01-21 |
20160019093 | SYSTEM AND METHOD TO CONTROL HEAT DISSIPATION THROUGH SERVICE LEVEL ANALYSIS - The system and method generally relate to reducing heat dissipated within a data center, and more particularly, to a system and method for reducing heat dissipated within a data center through service level agreement analysis, and resultant reprioritization of jobs to maximize energy efficiency. A computer implemented method includes performing a service level agreement (SLA) analysis for one or more currently processing or scheduled processing jobs of a data center using a processor of a computer device. Additionally, the method includes identifying one or more candidate processing jobs for a schedule modification from amongst the one or more currently processing or scheduled processing jobs using the processor of the computer device. Further, the method includes performing the schedule modification for at least one of the one or more candidate processing jobs using the processor of the computer device. | 2016-01-21 |
20160019094 | SYSTEM AND METHOD FOR ELECTRONIC WORK PREDICTION AND DYNAMICALLY ADJUSTING SERVER RESOURCES - A computer-implemented system and method facilitate dynamically allocating server resources. The system and method include determining a current queue distribution, referencing historical information associated with execution of at least one task, and predicting, based on the current queue distribution and the historical information, a total number of tasks of various task types that are to be executed during the time period in the future. Based on this prediction, a resource manager determines a number of servers that should be instantiated for use during the time period in the future. | 2016-01-21 |
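The prediction-to-provisioning step in 20160019094 amounts to: expected work = currently queued tasks plus historically observed arrivals over the window, divided by one server's throughput. A sketch under those assumptions (the linear model and all rates are illustrative, not taken from the application):

```python
import math

def servers_needed(queue_depth, arrival_rate, window_seconds, per_server_rate):
    """Predict how many servers to instantiate for an upcoming window.

    queue_depth: tasks currently waiting in the queue.
    arrival_rate: historical tasks/second expected to arrive.
    per_server_rate: tasks/second a single server can execute.
    """
    expected_tasks = queue_depth + arrival_rate * window_seconds
    per_server_capacity = per_server_rate * window_seconds
    # Round up: a fractional server still requires a whole instance.
    return math.ceil(expected_tasks / per_server_capacity)
```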
20160019095 | ASSIGNING A PORTION OF PHYSICAL COMPUTING RESOURCES TO A LOGICAL PARTITION - A data processing system includes physical computing resources that include a plurality of processors. The plurality of processors include a first processor having a first processor type and a second processor having a second processor type that is different than the first processor type. The data processing system also includes a resource manager to assign portions of the physical computing resources to be used when executing logical partitions. The resource manager is configured to assign a first portion of the physical computing resources to a logical partition, to determine characteristics of the logical partition, the characteristics including a memory footprint characteristic, to assign a second portion of the physical computing resources based on the characteristics of the logical partition, and to dispatch the logical partition to execute using the second portion of the physical computing resources. | 2016-01-21 |
20160019096 | SINGLE, LOGICAL, MULTI-TIER APPLICATION BLUEPRINT USED FOR DEPLOYMENT AND MANAGEMENT OF MULTIPLE PHYSICAL APPLICATIONS IN A CLOUD INFRASTRUCTURE - A deployment system enables a developer to define a logical, multi-tier application blueprint that can be used to create and manage (e.g., redeploy, upgrade, backup, patch) multiple applications in a cloud infrastructure. In the application blueprint, the developer models an overall application architecture, or topology, that includes individual and clustered nodes (e.g., VMs), logical templates, cloud providers, deployment environments, software services, application-specific code, properties, and dependencies between top-tier and second-tier components. The application can be deployed according to the application blueprint, which means any needed VMs are provisioned from the cloud infrastructure, and application components and software services are installed. | 2016-01-21 |
20160019097 | COMPUTER SYSTEM, MANAGEMENT COMPUTER AND MANAGEMENT METHOD - The purpose of the invention is to simplify the work of setting migration WWNs used in live migration of LPARs. Hypervisor management software of a management computer acquires and stores, in a storage unit, WWNs set for logical FC-HBAs of hypervisors of computers and host information including a WWN of a source capable of accessing a logical unit (LU) of a storage device. The hypervisor management software uses such information as a basis to output, on a display screen, information indicating whether or not a migration WWN, which is a WWN value of a logical FC-HBA used at migration of a virtual computer of the computer, is in a state of being able to be used to access the LU. | 2016-01-21 |
20160019098 | Server farm management - A cloud manager controls the deployment and management of machines for an online service. A build system creates deployment-ready virtual hard disks (VHDs) that are installed on machines that are spread across one or more networks in farms that each may include different configurations. The build system is configured to build VHDs of differing configurations that depend on a role of the virtual machine (VM) for which the VHD will be used. The build system uses the VHDs to create virtual machines (VMs) in both test and production environments for the online service. The cloud manager system automatically provisions machines with the created virtual hard disks (VHDs). Identical VHDs can be installed directly on the machines that have already been tested. | 2016-01-21 |
20160019099 | CALCULATING EXPECTED MAXIMUM CPU POWER AVAILABLE FOR USE - A method of calculating a processing power available from a supervisor of a multi-programmed computing system by a first partition of a plurality of partitions, the method comprising collecting, by the first partition, state data from the supervisor, the state data including a processing capacity of the multi-programmed computing system. The method further comprises initializing a remaining capacity variable to the processing capacity of the multi-programmed computing system; initializing variables, including setting a binary variable to a first logic value for each of the plurality of partitions; iteratively computing an entitlement and amount of power to award for each of the plurality of partitions having their respective binary variables set to the first logic value; and requesting the processing power from the supervisor, based on the iterative computation. | 2016-01-21 |
20160019100 | Method, Apparatus, and Chip for Implementing Mutually-Exclusive Operation of Multiple Threads - Multiple lock assemblies are distributed on a chip; each lock assembly manages a lock application message for applying for a lock and a lock release message for releasing a lock that are sent by one small core. Specifically, embodiments include receiving a lock message sent by a small core, where the lock message carries a memory address corresponding to a lock requested by a first thread in the small core; calculating, using the memory address of the requested lock, a code number of a lock assembly to which the requested lock belongs; and sending the lock message to the lock assembly corresponding to the code number, to request the lock assembly to process the lock message. | 2016-01-21 |
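The abstract of 20160019100 says only that the lock assembly's code number is "calculated using the memory address"; a plausible sketch is a modulo over cache-line-aligned addresses. Both the 64-byte line size and the modulo mapping below are assumptions:

```python
def lock_assembly_for(address, num_assemblies):
    """Derive the code number of the lock assembly that owns the lock
    at `address` (hypothetical mapping: modulo over 64-byte lines)."""
    cache_line = address >> 6          # assume 64-byte cache lines
    return cache_line % num_assemblies

def send_lock_message(assemblies, address, message):
    """Route a lock/unlock message to its owning assembly's queue
    and return the assembly's code number."""
    code = lock_assembly_for(address, len(assemblies))
    assemblies[code].append((address, message))
    return code
```

A mapping like this keeps all messages for one lock on one assembly, so no cross-assembly coordination is needed to serialize them.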
20160019101 | CONTENT GENERATION AND TRACKING APPLICATION, ENGINE, SYSTEM AND METHOD - The present invention relates to the transcription of audio, and, more particularly, to an engine, system and method of providing audio transcriptions for use in content resources. | 2016-01-21 |
20160019102 | APPLICATION PATTERN DISCOVERY - API associations among a plurality of service application programming interfaces may be identified by analyzing service API call logs, which contain data associated with invocation of the plurality of application programming interfaces by a plurality of applications, wherein sets of APIs that are determined to be called together are identified. For a set of service APIs, a plurality of applications that invoke the APIs in the set is identified. A sequence of API calls by an application in the plurality of applications is identified, wherein multiples sequences of APIs are identified, one sequence of API calls identified respectively for one application in the plurality of applications. An application pattern is determined based on the multiple sequences of service APIs. | 2016-01-21 |
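One way to read the pattern step of 20160019102: collect each application's ordered API-call sequence for a shared API set, then take the most frequent sequence as the application pattern. A toy sketch (the log format is hypothetical):

```python
from collections import Counter

def discover_pattern(call_logs):
    """Given per-application API call logs (app name -> ordered list
    of API names), return the most common call sequence across
    applications as the discovered application pattern."""
    sequences = [tuple(calls) for calls in call_logs.values()]
    pattern, _count = Counter(sequences).most_common(1)[0]
    return list(pattern)
```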
20160019103 | METHOD AND APPARATUS FOR PROVIDING AN APPLICATION CONTROL TRIGGER - An approach for implementing a live application control platform for providing live application control triggers via a multicast/broadcast transmission session. The approach includes receiving an input for specifying an application event trigger. The approach also includes delivering the application event trigger over a multicast data channel, wherein the application event trigger is received by a device within a coverage area of the multicast data channel to trigger an event to be performed by one or more applications of the device. | 2016-01-21 |
20160019104 | CROSS-DOMAIN DATA SHARING WITH PERMISSION CONTROL - An electronic device may maintain separate OS domains associated with security permissions. The OS domain may implement separate corresponding clipboard services. A clipboard agent or clipboard mediator service may receive a clipboard data request from a first application. The clipboard agent may determine which OS domain has most recently processed a store command associated with storing data in a corresponding clipboard service of the OS domain. The clipboard agent associated with the OS domain that most recently stored content may determine whether to send the data from the corresponding clipboard service based at least in part on permissions associated with the OS domain. Security of the clipboard access may be enforced on a per domain basis. Access to clipboard content may be mediated at the time of the request without a need to share data prior to the request. | 2016-01-21 |
20160019105 | COMPUTER EMBEDDED APPARATUS, RECORDING MEDIUM AND COMPUTER EMBEDDED APPARATUS TEST SYSTEM - There is provided a computer embedded apparatus comprising a process control unit configured with a plurality of software layers, wherein each of the layers provides a service to an upper layer and each of the layers includes a message processing unit configured to control a sequence of messages input to or output from the layer. | 2016-01-21 |
20160019106 | Seamless Method for Booting from a Degraded Software Raid Volume on a UEFI System - An information handling system includes a processor and a configuration detection and error handling module operable to read a first tag data file from a first storage volume, read a second tag data file from a second storage volume, and determine that the first storage volume and the second storage volume are configured as mirrored storage volumes based upon the first tag data file and the second tag data file. | 2016-01-21 |
20160019107 | MANAGING A CHECK-POINT BASED HIGH-AVAILABILITY BACKUP VIRTUAL MACHINE - A technique for failure monitoring and recovery of a first application executing on a first virtual machine includes storing machine state information during execution of the first virtual machine at predetermined checkpoints. An error message that includes an application error state at a failure point of the first application is received, by a hypervisor, from the first application. The first virtual machine is stopped in response to the error message. The hypervisor creates a second virtual machine and a second application from the stored machine state information that are copies of the first virtual machine and the first application. The second virtual machine and the second application are configured to execute from a checkpoint preceding the failure point. In response to receipt of a failure interrupt by the second application, one or more recovery processes are initiated in an attempt to avert the failure point. | 2016-01-21 |
20160019108 | DETERMINING ALERT CRITERIA IN A NETWORK ENVIRONMENT - Alert condition datasets are created from historic data taken from actual incidents of the kind the datasets are meant to indicate during future operations. A networked computer system including various devices is monitored for alert conditions associated with one or more of the devices. The severity of an alert is based on the number of alert conditions met for a given alert condition dataset. | 2016-01-21 |
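The severity rule in 20160019108 (severity grows with the number of conditions met in a dataset) can be sketched as a simple count. The threshold form and the severity labels below are illustrative, not from the application:

```python
def alert_severity(conditions, metrics):
    """Count how many alert conditions in a dataset are met by the
    current metrics and map the count to a severity.

    conditions: metric name -> threshold at or above which it fires.
    metrics: metric name -> current observed value.
    """
    met = sum(1 for name, threshold in conditions.items()
              if metrics.get(name, 0) >= threshold)
    if met == 0:
        return "ok"
    if met < len(conditions):
        return "warning"
    return "critical"        # every condition in the dataset fired
```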
20160019109 | METHOD AND SYSTEM FOR PROBLEM MODIFICATION AND PROCESSING - A notification of a problem associated with an application may be received. A difference may be determined between a problem version of the application and an operational version of the application to identify a change associated with the problem. A modification may be performed to the problem version of the application to resolve the problem associated with the change based on determining of the difference. Performing the modification may comprise associating a priority for resolution of the problem. The problem version of the application may be rolled back or rolled forward to the operational version of the application based on the priority for resolution. | 2016-01-21 |
20160019110 | INTEREST RETURN CONTROL MESSAGE - One embodiment provides a system that facilitates processing of error-condition information associated with a content-centric network (CCN) message transmitted over a network. During operation, the system receives, by a first node, a packet that corresponds to a CCN message, where a name for the CCN message is a hierarchically structured variable length identifier (HSVLI) which comprises contiguous name components ordered from a most general level to a most specific level. Responsive to determining that the CCN message triggers an error condition, the system generates an interest return message by pre-pending a data structure to the CCN message, where the data structure indicates the error condition. The system transmits the interest return message to a second node. | 2016-01-21 |
20160019111 | PARTIAL BAD BLOCK DETECTION AND RE-USE USING EPWR FOR BLOCK BASED ARCHITECTURES - Systems and methods for partial bad block reuse may be provided. Data may be copied from a block of a first memory to a block of a second memory. A post write read error may be detected in a first portion the data copied to the block of the second memory without detection of a post write read error in a second portion of the data copied to the block of the second memory. The block of the second memory may be determined to be a partial bad block usable for storage in response to detection of the post write read error in the first portion of the data but not in the second portion of the data. | 2016-01-21 |
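The classification in 20160019111 hinges on running the post-write-read check per portion of a block rather than per block. A byte-level sketch of that idea (real EPWR operates on flash pages with ECC, not raw byte comparisons):

```python
def classify_block(written, read_back, portion_size):
    """Compare what was written to a block with what reads back,
    portion by portion. A block with errors in some portions but not
    others is a 'partial bad block' whose clean portions remain
    usable for storage."""
    bad_portions = []
    for offset in range(0, len(written), portion_size):
        if written[offset:offset + portion_size] != \
                read_back[offset:offset + portion_size]:
            bad_portions.append(offset // portion_size)
    if not bad_portions:
        return "good", bad_portions
    if len(bad_portions) * portion_size < len(written):
        return "partial_bad", bad_portions
    return "bad", bad_portions
```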
20160019112 | INCREMENTAL ERROR DETECTION AND CORRECTION FOR MEMORIES - A device and method for incrementally updating the error detecting and correcting bits for an error corrected block of data in a cross point memory array is disclosed. When an error corrected block of data is modified, only the modified data bits and the incrementally updated error detecting and correcting bits are changed in the cross point memory device, for improved performance and reduced impact on device endurance. | 2016-01-21 |
20160019113 | MEMORY SYSTEM - A memory system includes a controlling unit configured to control data transfer between a first memory and a second memory. The controlling unit executes copy processing for, after reading out data stored in a first page of the second memory to the first memory, writing the data in a second page of the second memory; determines, when executing the copy processing, whether error correction processing for the data read out from the first page is successful; stores, when the error correction processing is successful, corrected data in the first memory and writes the corrected data in the second page; and reads out, when the error correction processing is unsuccessful, the data from the first page to the first memory and writes the data, not subjected to the error correction processing, in the second page. | 2016-01-21 |
20160019114 | METHODS AND SYSTEMS FOR STORING DATA IN A REDUNDANT MANNER ON A PLURALITY OF STORAGE UNITS OF A STORAGE SYSTEM - Described herein are techniques for storing data in a redundant manner on a plurality of storage units of a storage system. While all of the storage units are operating without failure, only error-correction blocks are stored on a first one of the storage units, while a combination of data blocks and error-correction blocks are stored on a second one of the storage units. Upon failure of the second storage unit, one or more data blocks and one or more error-correction blocks formerly stored on the second storage unit are reconstructed, and the one or more reconstructed data blocks and the one or more reconstructed error-correction blocks are stored on the first storage unit. | 2016-01-21 |
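The reconstruction step in 20160019114 can be sketched with single XOR parity (one dedicated parity unit, as in RAID 4) — a simplifying assumption, since the abstract does not fix a particular error-correction code:

```python
def parity_block(blocks):
    """Compute an error-correction block as the XOR of equal-sized data blocks."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

def reconstruct_lost(surviving, parity):
    """Rebuild the block lost with a failed storage unit from the surviving
    blocks plus the parity block (XOR of everything that remains)."""
    return parity_block(list(surviving) + [parity])
```

After reconstruction, the rebuilt data and parity blocks would be written onto the first (parity-only) unit, as the abstract describes for the failure case.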
20160019115 | FLEXIBLE DATA STORAGE SYSTEM - Methods and systems for managing and locating available storage space in a system comprising data files stored in a plurality of storage devices and configured in accordance with various data storage schemes (mirroring, striping and parity-striping). A mapping table associated with each of the plurality of storage devices is used to determine the available locations and amount of available space in the storage devices. The data storage schemes for one or more of the stored data files are changed to a basic storage mode when the size of a new data file configured in accordance with an assigned data storage scheme exceeds the amount of available space. The configured new data file is stored in accordance with the assigned data storage scheme in one or more of the available locations and the locations of the new data file are recorded. | 2016-01-21 |
20160019116 | APPARATUS AND METHOD FOR RECOVERING AN INFORMATION HANDLING SYSTEM FROM A NON-OPERATIONAL STATE - A method recovers an information handling system (IHS) from a non-operational state. The method includes determining if the non-operational state of the IHS has occurred. In response to determining that the non-operational state of the IHS has occurred, a basic input/output system (BIOS) recovery device is identified as being coupled to an embedded controller. In response to identifying that the BIOS recovery device is coupled to the embedded controller, an IHS type is transmitted to the BIOS recovery device. The BIOS recovery device is signaled to determine if the BIOS recovery device contains a BIOS payload corresponding to the IHS type. In response to determining that the BIOS recovery device contains the BIOS payload corresponding to the IHS type, the BIOS recovery device is triggered to transmit the BIOS payload to the embedded controller. The IHS is triggered to restart using the new BIOS payload. | 2016-01-21 |
20160019117 | CREATING CUSTOMIZED BOOTABLE IMAGE FOR CLIENT COMPUTING DEVICE FROM BACKUP COPY - According to certain aspects, a method of creating customized bootable images for client computing devices in an information management system can include: creating a backup copy of each of a plurality of client computing devices, including a first client computing device; subsequent to receiving a request to restore the first client computing device to the state at a first time, creating a customized bootable image that is configured to directly restore the first client computing device to the state at the first time, wherein the customized bootable image includes system state specific to the first client computing device at the first time and one or more drivers associated with hardware existing at time of restore on a computing device to be rebooted; and rebooting the computing device to the state of the first client computing device at the first time from the customized bootable image. | 2016-01-21 |
20160019118 | PRESENTING A FILE SYSTEM FOR A FILE CONTAINING ITEMS - What is disclosed is a method of operating a volume access system. The method includes processing at least a first file to generate a file system view of the first file comprising a plurality of items within the first file, and providing the file system view of the first file over a network interface as a hierarchical data volume. The method also includes receiving an access request for a requested item of the hierarchical data volume over the network interface, and in response, providing access to a first item of the plurality of items within the first file corresponding to the requested item. | 2016-01-21 |
20160019119 | PRIORITIZING BACKUP OF FILES - Approaches for prioritizing backup of files are described. In one example, a backup prioritizing parameter for a file shortlisted for backup is identified. Once the backup prioritizing parameter is identified, a position of the file for placing within a backup queue for backup, is subsequently determined based on the backup prioritizing parameter. | 2016-01-21 |
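The queue-position idea in 20160019119 maps naturally onto a priority queue. A minimal sketch where the "backup prioritizing parameter" is assumed to be a numeric priority (lower = backed up sooner), with insertion order as tie-breaker:

```python
import heapq

class BackupQueue:
    """Place shortlisted files into a backup queue ordered by a
    backup-prioritizing parameter (an illustrative numeric priority)."""

    def __init__(self):
        self._heap = []
        self._seq = 0            # tie-breaker preserving insertion order

    def add(self, path, priority):
        heapq.heappush(self._heap, (priority, self._seq, path))
        self._seq += 1

    def next_file(self):
        """Return the highest-priority file still awaiting backup."""
        return heapq.heappop(self._heap)[2]
```

The parameter itself could be derived from file age, size, or change rate; the abstract leaves that open.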
20160019120 | STORAGE APPARATUS AND STORAGE APPARATUS MIGRATION METHOD - A source remote copy configuration in a source storage system is migrated to a destination storage system as a destination remote copy configuration. The destination primary storage apparatus of the destination storage system defines a virtual volume mapped to the primary volume provided by the source primary storage apparatus, which is a storage area of the virtual volume; takes over a first identifier of the primary volume to the virtual volume; transfers, when the virtual volume receives an access request, the access request to the source primary storage apparatus to write data in the primary volume; and takes over the first identifier from the virtual volume to another primary volume provided by the destination primary storage apparatus, after completion of copying data from the primary volume of the source primary storage apparatus into the primary volume of the destination primary storage apparatus and the secondary volume of the destination secondary storage apparatus. | 2016-01-21 |
20160019121 | DATA TRANSFERS BETWEEN CLUSTER INSTANCES WITH DELAYED LOG FILE FLUSH - Techniques for processing changes in a cluster database system are provided. A first instance in the cluster transfers a data block to a second instance in the cluster before a redo record that stores one or more changes that the first instance made to the data block is durably stored. The first instance also transfers, to the second instance, a block change timestamp that indicates when a redo record for the one or more changes was generated by the first instance. The first instance also separately sends, to the second instance, a last store timestamp that indicates when the last redo record that was durably stored was generated by the first instance. The block change timestamp and the last store timestamp are used by the second instance when creating redo records for changes (made by the second instance) that depend on the redo record generated by the first instance. | 2016-01-21 |
20160019122 | SYSTEM AND METHOD FOR MAINTAINING SERVER DATA INTEGRITY - The System Integrity Guardian can protect any type of object and can repair and restore the system to its original state of integrity. The Client component is the user interface for administering the System Integrity Guardian environment. An administrator can determine which servers to protect, which objects to protect, and what actions will be taken when an event that breaches integrity occurs. The Monitor Agent component is the watchdog of the System Integrity Guardian that captures and addresses any event that occurs on any object being protected. The Server component includes the server and the Protected Object Central Repository. The authoritative copies are maintained, digital signatures are created and stored, objects are validated, and communication between the three units is performed. | 2016-01-21 |
20160019123 | FAULT TOLERANCE FOR COMPLEX DISTRIBUTED COMPUTING OPERATIONS - A method for enabling a distributed computing system to tolerate system faults during the execution of a client process. The method includes instantiating an execution environment relating to the client process; and executing instructions within the execution environment, the instructions causing the execution environment to issue further instructions to the distributed computing system, the further instructions relating to actions to be performed with respect to data stored on the distributed computing system. An object interface proxy receives the further instructions and monitors the received instructions to determine if the execution environment is in a desired save-state condition; and, if so, saves a current state of the execution environment in a data store. | 2016-01-21 |
20160019124 | IN-BAND RECOVERY MECHANISM FOR I/O MODULES IN A DATA STORAGE SYSTEM - Technology is disclosed for recovering I/O modules in a storage system using in-band alternate control path (ACP) architecture (“the technology”). The technology enables a storage server to transmit control commands, e.g., for recovering an I/O module, to the I/O module over a data path that is typically used to transmit data commands. The control commands are typically transmitted using ACP that is separate from the data path. By enabling transmission of control commands over the data path, the technology eliminates the need for separate medium for ACP, at least in part, to transmit the control commands. The technology can be implemented in a pure in-band ACP mode, which supports recovering an I/O module of a storage shelf in which at least one I/O module is responsive, and/or in a mixed in-band ACP mode, which supports recovery of I/O modules of a storage shelf in which all I/O modules are non-responsive. | 2016-01-21 |
20160019125 | DYNAMICALLY CHANGING MEMBERS OF A CONSENSUS GROUP IN A DISTRIBUTED SELF-HEALING COORDINATION SERVICE - Systems, methods, and computer program products for managing a consensus group in a distributed computing cluster, by determining that an instance of an authority module executing on a first node, of a consensus group of nodes in the distributed computing cluster, has failed; and adding, by an instance of the authority module on a second node of the consensus group, a new node to the consensus group to replace the first node. The new node is a node in the computing cluster that was not a member of the consensus group at the time the instance of the authority module executing on the first node is determined to have failed. | 2016-01-21 |
20160019126 | FAILURE RECOVERY APPARATUS OF DIGITAL LOGIC CIRCUIT AND METHOD THEREOF - Exemplary embodiments of the present invention relate to a failure recovery apparatus of a digital logic circuit, and a method thereof, for when a fault occurs in the digital logic circuit. A failure recovery apparatus according to an embodiment of the present invention comprises: a fault detection block configured to determine fault occurrence by comparing output results of a plurality of digital logic circuits which perform the same operation using a clock having a first cycle; and a failure recovery block configured to perform a failure recovery operation of the plurality of digital logic circuits by using a clock having a second cycle which is longer than the first cycle when it is determined that a fault has occurred. According to exemplary embodiments of the present invention, when a fault occurs in digital logic circuits due to external factors, it provides high reliability in failure recovery of the digital logic circuits. | 2016-01-21 |
20160019127 | Methods and Systems for Die Failure Testing - The disclosed method includes, at a storage controller of a storage system, receiving host instructions to modify configuration settings corresponding to a first memory portion of a plurality of memory portions. The method includes, in response to receiving the host instructions to modify the configuration settings, identifying the first memory portion from the host instructions and modifying the configuration settings corresponding to the first memory portion, in accordance with the host instructions. The method includes, after modifying the configuration settings corresponding to the first memory portion, sending one or more commands to perform memory operations having one or more physical addresses corresponding to the first memory portion and receiving a failure notification indicating failed performance of at least a first memory operation of the one or more memory operations. The method includes, in response to receiving the failure notification, executing one or more error recovery mechanisms. | 2016-01-21 |
20160019128 | SYSTEMS AND METHODS PROVIDING MOUNT CATALOGS FOR RAPID VOLUME MOUNT - Systems and methods which provide mount catalogs to facilitate rapid volume mount are shown. A mount catalog of embodiments may be provided for each aggregate containing volumes to be mounted by a takeover node of a storage system. The mount catalog may comprise a direct storage level, such as a DBN level, based mount catalog. Such mount catalogs may be maintained in a reserved portion of the storage devices containing a corresponding aggregate and volumes, wherein the storage device reserved portion is known to a takeover node. In operation according to embodiments, an HA pair takeover node uses a mount catalog to access the blocks used to mount volumes of an HA pair partner node prior to a final determination that the partner node is in fact a failed node and prior to onlining the aggregate containing the volumes. | 2016-01-21 |
20160019129 | UNIFICATION OF DESCRIPTIVE PROGRAMMING AND OBJECT REPOSITORY - A computer device may include logic configured to provide a centralized library for descriptive programming and other types of object descriptions to a testing script engine. The descriptive programming library may store test object descriptions for test objects associated with an application under testing. The logic may be further configured to provide a unification layer over all the object description types and to provide inheritance among the objects at the unification layer. The logic may be further configured to store a test object description, associated with a test object, in the descriptive programming library; identify a reference to the test object in a descriptive programming statement associated with the testing script engine; access the stored test object description in the descriptive programming library based on the identified reference to the test object; and identify an application object, associated with the application under testing, based on the stored test object description. | 2016-01-21 |
20160019130 | TRACKING CORE-LEVEL INSTRUCTION SET CAPABILITIES IN A CHIP MULTIPROCESSOR - Techniques described herein generally relate to a task management system for a chip multiprocessor having multiple processor cores. The task management system tracks the changing instruction set capabilities of each processor core and selects processor cores for use based on the tracked capabilities. In this way, a processor core with one or more failed processing elements can still be used effectively, since the processor core may be selected to process instruction sets that do not use the failed processing elements. | 2016-01-21 |
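The core-selection idea in 20160019130 can be sketched with capability bitmasks: a core with a failed processing element simply loses the corresponding bit and is skipped for tasks needing it. The flag names and first-fit policy here are illustrative assumptions:

```python
# Hypothetical instruction-set capability flags, one bit per processing element.
FPU, SIMD, CRYPTO = 1, 2, 4

def select_core(core_caps, required):
    """Pick the first core whose surviving capabilities cover the instruction
    sets a task requires. core_caps maps core id -> capability bitmask, which
    the task manager updates as processing elements fail."""
    for core_id, caps in core_caps.items():
        if caps & required == required:
            return core_id
    return None                      # no usable core for this instruction mix
```

A core whose SIMD unit fails thus remains schedulable for FPU-only work, matching the abstract's point that partially failed cores stay useful.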
20160019131 | Methods and Arrangements to Collect Data - Methods and arrangements to collect data related to the state or conditions of a system are described herein. Embodiments may comprise a data identifier to identify data to collect in response to an event and a data collector to collect the identified data. The data collector may comprise firmware, code in ROM, a state machine, and/or other logic, and the data identifier may also comprise firmware, code in ROM, a state machine, and/or other logic that may access information and/or code in a file or other data storage to identify the data to collect. The data storage may comprise information and/or code to identify the location of data to collect and, in some embodiments, the sequence with which to collect the data. For example, such a file may comprise an address or address range within memory of a specific component of the system such as a memory controller. | 2016-01-21 |
20160019132 | VALIDATING CODE OF AN EXTRACT, TRANSFORM AND LOAD (ETL) TOOL - An approach for validating code for an extract, transform and load (ETL) tool is provided. Naming, coding, and performance standards for the code are received. The code is exported to a job definition file and parsed. Violations of the standards are determined by determining the parsed code does not match the standards. A report identifying the violations is generated. Based on a review of the report and a rework of the code to comply with the standards, the reworked code is exported to another job definition file and parsed, the parsed reworked code is determined to not include the violations of the standards, and a second report is generated that indicates that the reworked code does not include the violations. An approval of the reworked code is received based on the second report. | 2016-01-21 |
20160019133 | METHOD FOR TRACING A COMPUTER SOFTWARE - A system and method for determining execution trace differences in a computer-implemented software application is provided herein. A software application under analysis is executed at least twice, thereby generating first and second execution traces and associated first and second sets of execution data describing all the necessary data gained by program instrumentation at each program statement of the source or bytecode level. These data are stored for at least two executions of the software application, then compared to determine a set of differences between the first and second executions of the program. The set of differences may contain statement coverage data, execution trace data, variable values, or influences among program instructions. The differences can be arranged into historical order and then analyzed to identify the location of the fault in the program or to map the related code to a feature in unknown code. | 2016-01-21 |
20160019134 | ERROR ASSESSMENT TOOL - Embodiments of the invention are directed to a system, method, and computer program product for assessing error notifications associated with one or more application functions. An exemplary embodiment includes receiving an indication of an error associated with at least one function in an application; extracting information associated with the application from one or more sources; and initiating a presentation of a second user-interface to enable a user to resolve the error, wherein the second user-interface comprises at least an aggregation of the information extracted from the one or more sources. | 2016-01-21 |
20160019135 | OPTIMAL TEST SUITE REDUCTION AS A NETWORK MAXIMUM FLOW - A novel approach to test-suite reduction based on network maximum flows. Given a test suite T and a set of test requirements R, the method identifies a minimal set of test cases which maintains the coverage of test requirements. The approach encodes the problem with a bipartite directed graph and computes a minimum cardinality subset of T that covers R as a search among maximum flows, using the classical Ford-Fulkerson algorithm in combination with efficient constraint programming techniques. Test results have shown that the method outperforms the Integer Linear Programming (ILP) approach by 15-3000 times, in terms of the time needed to find the solution. At the same time, the method obtains the same reduction rate as ILP, because both approaches compute optimal solutions. When compared to the simple greedy approach, the method takes on average 30% more time and produces from 5% to 15% smaller test suites. | 2016-01-21 |
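The abstract of 20160019135 benchmarks against "the simple greedy approach"; that greedy set-cover baseline (the comparison point, not the patented max-flow method itself) looks roughly like this:

```python
def greedy_reduce(test_suite, requirements):
    """Greedy set-cover baseline: repeatedly keep the test case that covers
    the most still-uncovered requirements. test_suite maps a test name to the
    set of requirements it covers."""
    uncovered = set(requirements)
    kept = []
    while uncovered:
        best = max(test_suite, key=lambda t: len(test_suite[t] & uncovered))
        if not test_suite[best] & uncovered:
            break                    # remaining requirements are uncoverable
        kept.append(best)
        uncovered -= test_suite[best]
    return kept
```

Greedy is fast but only approximate; the patented approach instead searches among maximum flows on a bipartite test/requirement graph to find a provably minimum subset, which is why it matches ILP's reduction rate while producing 5-15% smaller suites than greedy.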
20160019136 | NON-VOLATILE MEMORY INTERFACE - Apparatuses, systems, methods, and computer program products are disclosed for a memory controller. An apparatus includes a volatile memory medium located on a memory module. An apparatus includes a non-volatile memory medium located on a memory module. A memory controller is located on a memory module. A memory controller may be configured to provide access to at least a non-volatile memory medium over a direct wire interface with a processor. | 2016-01-21 |
20160019137 | Methods and Systems for Flash Buffer Sizing - The embodiments described herein are used to allocate memory in a storage system. The method includes, at a memory controller in the storage system, determining a current memory allocation for a set of memory devices, wherein the set of memory devices is formatted with a ratio of first storage density designated portions to second storage density designated portions in accordance with the current memory allocation. The method further includes detecting satisfaction of one or more memory reallocation trigger conditions. The method further includes, in response to detecting satisfaction of one or more memory reallocation trigger conditions, modifying the ratio of the first storage density designated portions to the second storage density designated portions in the set of memory devices to generate a second memory allocation for the set of memory devices. | 2016-01-21 |
20160019138 | MEMORY MODULE AND SYSTEM AND METHOD OF OPERATION - A memory module comprises a volatile memory subsystem configured to couple to a memory channel in a computer system and capable of serving as main memory for the computer system, a non-volatile memory subsystem providing storage for the computer system, and a module controller coupled to the volatile memory subsystem, the non-volatile memory subsystem, and a control/address (C/A) bus. The module controller reads first data from the non-volatile memory subsystem in response to a Flash access request received via the memory channel, and causes at least a portion of the first data to be written into the volatile memory subsystem in response to a dummy write memory command received via the C/A bus. The module controller includes status registers accessible by the computer system via the memory bus. | 2016-01-21 |
20160019139 | MEMORY CONTROLLER AND METHOD FOR INTERLEAVING DRAM AND MRAM ACCESSES - A memory system and memory controller for interleaving volatile and non-volatile memory accesses are described. In the memory system, the memory controller is coupled to the volatile and non-volatile memories using a shared address bus. Activate latencies for the volatile and non-volatile memories are different, and registers are included on the memory controller for storing latency values. Additional registers on the memory controller store precharge latencies for the memories as well as page size for the non-volatile memory. A memory access sequencer on the memory controller asserts appropriate chip select signals to the memories to initiate operations therein. | 2016-01-21 |
20160019140 | SYNCHRONIZING A TRANSLATION LOOKASIDE BUFFER WITH AN EXTENDED PAGING TABLE - A processor including logic to execute an instruction to synchronize a mapping from a physical address of a guest of a virtualization based system (guest physical address) to a physical address of the host of the virtualization based system (host physical address), and stored in a translation lookaside buffer (TLB), with a corresponding mapping stored in an extended paging table (EPT) of the virtualization based system. | 2016-01-21 |
20160019141 | METHOD AND APPARATUS FOR MAPPING A LOGICAL ADDRESS BETWEEN MEMORIES OF A STORAGE DRIVE BASED ON WRITE FREQUENCY RANKINGS - A storage drive including a first and second memories and a controller. The second memory has a write cycle lifetime that is less than a write cycle lifetime of the first memory. Each of the first and second memories includes solid-state memory. The controller: determines a write frequency for a first logical address; and based on the write frequency, determines a write frequency ranking for the first logical address. The write frequency ranking is based on a weighted time-decay average of write counts or an average of elapsed times of write cycles. The controller also: determines whether the write frequency ranking is greater than a lowest write frequency ranking of logical addresses of the first memory; and if the write frequency ranking of the first logical address is greater, maps the logical address with the lowest write frequency ranking in the first memory to the second memory. | 2016-01-21 |
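The "weighted time-decay average of write counts" in 20160019141 can be sketched as an exponentially decayed sum over write timestamps, so recent writes dominate the ranking; the half-life parameter and helper names are assumptions, not the patent's formula:

```python
def decayed_rank(write_times, now, half_life=60.0):
    """Write-frequency ranking: each past write contributes a weight that
    halves every `half_life` time units, so hot addresses rank highest."""
    return sum(2.0 ** (-(now - t) / half_life) for t in write_times)

def should_remap(rank, lowest_rank_in_first_memory):
    """Keep a logical address in the higher-endurance first memory only if it
    is written more often than the coldest address already mapped there."""
    return rank > lowest_rank_in_first_memory
```

When `should_remap` is true, the coldest address in the first memory would be remapped to the second (shorter-lived) memory to make room, as the abstract describes.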
20160019142 | Method of collecting garbage blocks in a solid state drive - A method of collecting garbage blocks in a solid state drive includes collecting a garbage block of a multiple level cell flash memory, selecting a spare block as a target block, copying effective data of the garbage block to a physical cell of the target block, searching for unprogrammed physical pages of the physical cell of the target block, using dummy data to complete programming of the unprogrammed physical pages of the physical cell, deleting the effective data in the garbage block, and recycling the garbage block to be a new spare block. | 2016-01-21 |
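The sequence in 20160019142 — copy valid data to a target block, pad the unprogrammed pages with dummy data, erase the source, recycle it as a spare — can be sketched over an in-memory model of blocks; the page count and dummy pattern are illustrative assumptions:

```python
PAGE_COUNT = 4
DUMMY = b"\xff" * 4            # filler; real drives use vendor-specific dummy data

def collect_garbage(garbage_block, spare_block):
    """Copy effective (valid) pages to the spare target block, program any
    leftover pages of the cell with dummy data so the whole cell is
    programmed, then erase the source block so it becomes a new spare."""
    target = list(spare_block)
    i = 0
    for page in garbage_block:
        if page is not None:                 # None marks a deleted/invalid page
            target[i] = page
            i += 1
    for j in range(i, PAGE_COUNT):           # pad unprogrammed pages
        target[j] = DUMMY
    garbage_block[:] = [None] * PAGE_COUNT   # erased: recycled as a new spare
    return target
```

Padding with dummy data matters on MLC flash because leaving a cell partially programmed can disturb data already written to paired pages.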
20160019143 | GARBAGE COLLECTION FOR SELECTING REGION TO RECLAIM ON BASIS OF UPDATE TO REFERENCE SOURCE INFORMATION - In GC processing in which a memory area is managed in divisions, the collection efficiency of an area is further optimized. To realize this, a calculator including an arithmetic unit and a memory includes: a storage unit which stores, for each of a plurality of storage areas allocated to the memory, reference source information of the data stored in that storage area; and a control unit which selects as a release target a storage area whose updated reference source information differs from the reference source information recorded in the storage unit. | 2016-01-21 |
20160019144 | SYSTEMS AND/OR METHODS FOR ENABLING STORAGE TIER SELECTION AND USAGE AT A GRANULAR LEVEL - Certain example embodiments relate to memory management techniques that enable users to “pin” elements to particular storage tiers (e.g., RAM, SSD, HDD, tape, or the like). Once pinned, elements are not moved from tier-to-tier during application execution. A memory manager, working with at least one processor, receives requests to store and retrieve data during application execution. Each request is handled using a non-transitory computer readable storage medium (rather than a transitory computer readable storage medium), if the associated data is part of a data cache that is pinned to the non-transitory computer readable storage medium, or if the associated data itself is pinned to the non-transitory computer readable storage medium. If neither condition applies, the memory manager determines which one of the non-transitory and the transitory computer readable storage mediums should be used in handling the respective received request, and handles the request accordingly. | 2016-01-21 |
20160019145 | STORAGE SYSTEM AND CACHE CONTROL METHOD - Of first and second storage controllers, the receiving controller which receives a read request transfers the read request to the associated controller, which is the one associated with the read source storage area, when the receiving controller is not the associated controller. It is, however, the receiving controller that reads the read-target data from a read source storage device, writes the read-target data to a cache memory of the receiving controller, and transmits the read-target data written in the cache memory of the receiving controller to a host apparatus. | 2016-01-21 |
20160019146 | WRITE BACK COORDINATION NODE FOR CACHE LATENCY CORRECTION - A coordinating node acts as a write back cache, isolating local cache storage endpoints from latencies associated with accessing geographically remote cloud cache and storage resources. | 2016-01-21 |
20160019147 | DECOUPLING DATA AND METADATA IN HIERARCHICAL CACHE SYSTEM - A coordinating node creates virtual storage from a hierarchy of local and remote cache storage resources by maintaining global logical block address (LBA) metadata maps. A size of the metadata maps at each level of the hierarchy is independent of an amount of data allocated to a respective cache store at each level. | 2016-01-21 |
20160019148 | HASH DISCRIMINATOR PROCESS FOR HIERARCHICAL CACHE SYSTEM - A coordinating node maintains globally consistent logical block address (LBA) metadata for a hierarchy of caches, which may be implemented in local and cloud based storage resources. Associated storage endpoints initially determine a hash associated with each access request, but forward the access request to the coordinating node to determine a unique discriminator for each hash. | 2016-01-21 |
20160019149 | HISTORY BASED MEMORY SPECULATION FOR PARTITIONED CACHE MEMORIES - A cache memory that selectively enables and disables speculative reads from system memory is disclosed. The cache memory may include a plurality of partitions, and a plurality of registers. Each register may be configured to store data indicative of a source of returned data for previous requests directed to a corresponding partition. Circuitry may be configured to receive a request for data to a given partition. The circuitry may be further configured to read contents of a register corresponding to the given partition, and initiate a speculative read dependent upon the contents of the register. | 2016-01-21 |
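The per-partition history registers in 20160019149 might be modeled as one shift register per partition, recording whether recent requests were ultimately served from system memory; the 4-bit width and the any-bit-set policy are illustrative assumptions:

```python
class SpeculationGate:
    """One history register per cache partition: a set bit records that a
    recent request to that partition was served from system memory."""

    def __init__(self, partitions, bits=4):
        self.bits = bits
        self.hist = [0] * partitions

    def record(self, part, from_memory):
        """Shift in the source of the returned data for the latest request."""
        mask = (1 << self.bits) - 1
        self.hist[part] = ((self.hist[part] << 1) | int(from_memory)) & mask

    def speculate(self, part):
        """Issue a speculative memory read only if history shows the partition
        has recently missed to memory; otherwise save the bandwidth."""
        return self.hist[part] != 0
```

Partitions that keep hitting in cache thus stop triggering speculative memory reads, while miss-prone partitions keep the latency benefit of speculation.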
20160019150 | INFORMATION PROCESSING DEVICE, CONTROL METHOD OF INFORMATION PROCESSING DEVICE AND CONTROL PROGRAM OF INFORMATION PROCESSING DEVICE - An information processing device comprising a plurality of nodes, each node comprising an arithmetic operation device configured to execute an arithmetic process, and a main memory which stores data, wherein each of the arithmetic operation devices belonging to each of the plurality of nodes is configured to read target data, on which the arithmetic operation device executes the arithmetic operation, from a storage device other than the main memory, based on first address information indicating a storage position in the storage device, and write the target data into the main memory of its own node. | 2016-01-21 |
20160019151 | USING L1 CACHE AS RE-ORDER BUFFER - A method is shown that eliminates the need for a dedicated reorder-buffer register bank or memory space in a multi-level cache system. As data requests from the L2 cache may be returned out of order, the L1 cache uses its cache memory to buffer the out-of-order data and provides the data to the requesting processor in the correct order from the buffer. | 2016-01-21 |
20160019152 | PREFETCH LIST MANAGEMENT IN A COMPUTER SYSTEM - Method and apparatus for tracking a prefetch list of a list prefetcher associated with a computer program in the event the list prefetcher cannot track the computer program. During a first execution of a computer program, the computer program outputs checkpoint indications. Also during the first execution of the computer program, a list prefetcher builds a prefetch list for subsequent executions of the computer program. As the computer program executes for the first time, the list prefetcher associates each checkpoint indication with a location in the building prefetch list. Upon subsequent executions of the computer program, if the list prefetcher cannot track the prefetch list to the computer program, the list prefetcher waits until the computer program outputs the next checkpoint indication. The list prefetcher is then able to jump to the location of the prefetch list associated with the checkpoint indication. | 2016-01-21 |
20160019153 | PRE-LOADING CACHE LINES - A system for caching is configured to set a pending lock state for a cache line, pre-load the cache line into cache memory, and lock the cache line to prevent its eviction from the cache memory. The cache line is associated with instructions or data, and the pre-loading of the cache line may include loading the cache line into the cache memory before an algorithm relying on the instructions or data needs them. A cache line associated with instructions may be pre-loaded without executing those instructions. The pending lock state may be achieved by configuring the cache system so that, when a cache line associated with an address is loaded into the cache memory, the cache system locks that cache line. The locking of the cache line may be done by promoting the pending lock state to a locked state. | 2016-01-21 |
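A toy Python model of the pending-lock life cycle described above (all names are illustrative): marking an address pending causes its lock to be promoted when the line is loaded, and locked lines are skipped by eviction.

```python
class LockingCache:
    """Sketch of a cache with a pending-lock state that is promoted to a
    locked state when the line is loaded; locked lines are never evicted."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}       # addr -> data
        self.order = []       # FIFO eviction order for unlocked lines
        self.pending = set()  # addresses marked for locking before load
        self.locked = set()

    def mark_pending_lock(self, addr):
        self.pending.add(addr)

    def load(self, addr, data):
        if len(self.lines) >= self.capacity:
            self._evict_one()
        self.lines[addr] = data
        if addr in self.pending:          # promote pending lock to locked
            self.pending.discard(addr)
            self.locked.add(addr)
        else:
            self.order.append(addr)       # only unlocked lines are evictable

    def _evict_one(self):
        while self.order:
            victim = self.order.pop(0)
            if victim in self.lines:
                del self.lines[victim]
                return
        raise RuntimeError("all resident lines are locked")
```

A real design would also need an unlock path and a policy for the all-locked case; both are omitted here for brevity.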
20160019154 | PREFETCH LIST MANAGEMENT IN A COMPUTER SYSTEM - Method and apparatus for tracking a prefetch list of a list prefetcher associated with a computer program in the event the list prefetcher cannot track the computer program. During a first execution of a computer program, the computer program outputs checkpoint indications. Also during the first execution, a list prefetcher builds a prefetch list for subsequent executions of the computer program. As the computer program executes for the first time, the list prefetcher associates each checkpoint indication with a location in the prefetch list as it is being built. Upon subsequent executions of the computer program, if the list prefetcher loses track of the computer program's position in the prefetch list, the list prefetcher waits until the computer program outputs the next checkpoint indication. The list prefetcher can then jump to the location of the prefetch list associated with that checkpoint indication. | 2016-01-21 |
20160019155 | ADAPTIVE MECHANISM TO TUNE THE DEGREE OF PRE-FETCH STREAMS - According to one general aspect, a method may include monitoring a plurality of pre-fetch cache requests associated with a data stream. The method may also include evaluating the accuracy of the pre-fetch cache requests. The method may further include, based at least in part upon the accuracy of the pre-fetch cache requests, adjusting the maximum amount of data that may be pre-fetched in excess of the data stream's current actual demand for data. | 2016-01-21 |
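The adjustment loop the abstract describes (grow the prefetch degree when accuracy is high, shrink it when accuracy is low) can be sketched as follows; the thresholds and bounds are invented for illustration.

```python
class AdaptiveDegree:
    """Sketch of accuracy-driven tuning of the prefetch degree, i.e. the
    maximum data fetched ahead of a stream's actual demand."""

    def __init__(self, degree=2, lo=0.3, hi=0.7, min_d=1, max_d=8):
        self.degree, self.lo, self.hi = degree, lo, hi
        self.min_d, self.max_d = min_d, max_d
        self.issued = self.useful = 0

    def record(self, was_useful):
        """Note one issued prefetch and whether demand ever consumed it."""
        self.issued += 1
        self.useful += bool(was_useful)

    def adjust(self):
        """At the end of an epoch, move the degree toward the accuracy."""
        if self.issued == 0:
            return self.degree
        accuracy = self.useful / self.issued
        if accuracy > self.hi:
            self.degree = min(self.max_d, self.degree + 1)
        elif accuracy < self.lo:
            self.degree = max(self.min_d, self.degree - 1)
        self.issued = self.useful = 0   # start a fresh measurement epoch
        return self.degree
```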
20160019156 | DISK CACHE ALLOCATION - Implementations disclosed herein provide a method comprising segregating a disk cache into a plurality of allocation units, and allocating the plurality of allocation units out-of-order. | 2016-01-21 |
20160019157 | Method and Apparatus For Flexible Cache Partitioning By Sets And Ways Into Component Caches - Aspects include computing devices, systems, and methods for partitioning a system cache by sets and ways into component caches. A system cache memory controller may manage the component caches and manage access to them. The system cache memory controller may receive system cache access requests specifying component cache identifiers, and match those identifiers against records in a component cache configuration table that correlate each identifier with the traits of its component cache. The component cache traits may include a set shift trait, a set offset trait, and target ways, which together define the location of a component cache in the system cache. The system cache memory controller may also receive a physical address for the system cache in the system cache access request, determine an indexing mode for the component cache, and translate the physical address for the component cache. | 2016-01-21 |
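Under one plausible reading of the set shift and set offset traits, a component cache spans a power-of-two fraction of the system cache's sets starting at an offset, and indexing wraps within that fraction. The Python sketch below follows that reading; the constants and exact field semantics are assumptions, not taken from the patent.

```python
LINE_SIZE = 64     # bytes per cache line (assumed)
TOTAL_SETS = 1024  # sets in the whole system cache (assumed)

def component_set_index(phys_addr, set_shift, set_offset):
    """Translate a physical address to a set index inside a component cache
    that occupies TOTAL_SETS >> set_shift sets starting at set_offset."""
    sets_in_component = TOTAL_SETS >> set_shift
    local_set = (phys_addr // LINE_SIZE) % sets_in_component
    return set_offset + local_set
```

With `set_shift=2` and `set_offset=256`, every address lands in sets 256 through 511, so the component cache is confined to one quarter of the system cache; target ways would further restrict which ways within those sets it may use.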
20160019158 | Method And Apparatus For A Shared Cache With Dynamic Partitioning - Aspects include computing devices, systems, and methods for dynamically partitioning a system cache by sets and ways into component caches. A system cache memory controller may manage the component caches and manage access to them. The system cache memory controller may receive system cache access requests and reserve locations in the system cache corresponding to the component caches correlated with the component cache identifiers of the requests. Reserving locations in the system cache may activate those locations for use by a requesting client, and may also prevent other clients from using the reserved locations. Releasing the locations in the system cache may deactivate them and allow other clients to use them. A client that has reserved locations in the system cache may change the number of locations reserved within its component cache. | 2016-01-21 |
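The reserve/release bookkeeping can be sketched as a free pool of (set, way) locations. This Python model is illustrative; the identifiers and the slot granularity are assumptions.

```python
class SharedCachePartitioner:
    """Sketch of reserve/release bookkeeping for dynamically partitioned
    component caches built from (set, way) locations."""

    def __init__(self, sets, ways):
        self.free = {(s, w) for s in range(sets) for w in range(ways)}
        self.owned = {}  # component cache id -> reserved (set, way) slots

    def reserve(self, cid, count):
        """Activate `count` free locations for this client's component cache."""
        if count > len(self.free):
            raise ValueError("not enough free cache locations")
        grant = {self.free.pop() for _ in range(count)}
        self.owned.setdefault(cid, set()).update(grant)

    def release_all(self, cid):
        """Deactivate the client's locations so other clients may use them."""
        self.free |= self.owned.pop(cid, set())

    def resize(self, cid, new_count):
        """Clients may grow or shrink their reservation, per the abstract."""
        cur = self.owned.setdefault(cid, set())
        if new_count > len(cur):
            self.reserve(cid, new_count - len(cur))
        else:
            while len(cur) > new_count:
                self.free.add(cur.pop())
```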
20160019159 | STORAGE SYSTEM AND DATA STORING METHOD - Provided is a storage system including: a storage medium including a plurality of physical storage areas, each having an upper limit on its number of rewrites, and a medium controller that controls I/O (input/output) of data to/from the plurality of physical storage areas; and a storage controller connected to the storage medium. When no physical storage area is allocated to a write-destination logical storage area among a plurality of logical storage areas, the medium controller allocates a vacant physical storage area to the write-destination logical storage area and writes the write-target data to it. The plurality of logical storage areas includes an available logical area group determined based on a relationship between the available logical storage capacity and the rewrite frequency of the plurality of physical storage areas. | 2016-01-21 |
20160019160 | Methods and Systems for Scalable and Distributed Address Mapping Using Non-Volatile Memory Modules - In a method to provide scalable and distributed address mapping in a storage device, a host command is received or accessed that specifies an operation to be performed and a logical address corresponding to a portion of memory within the storage device. A storage controller of the storage device maps the specified logical address to a first subset of a physical address, using a first address translation table, and identifies one NVM module among a plurality of NVM modules in the storage device, in accordance with the first subset of the physical address. The method further includes, at the identified NVM module, mapping the specified logical address to a second subset of the physical address, using a second address translation table, identifying the portion of non-volatile memory within the identified NVM module corresponding to the specified logical address, and executing the specified operation on that portion of memory. | 2016-01-21 |
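The two-level mapping splits translation between the storage controller and the addressed NVM module. Below is a minimal Python sketch with plain dicts standing in for the two translation tables; all names are illustrative.

```python
class DistributedMapper:
    """Two-level logical-address mapping: a controller-level table selects
    the NVM module (first subset of the physical address), and a per-module
    table selects the location inside that module (second subset)."""

    def __init__(self, modules):
        self.controller_table = {}                      # lba -> module id
        self.module_tables = {m: {} for m in modules}   # per-module: lba -> offset

    def write(self, lba, module, offset):
        """Record both halves of the mapping for a logical block address."""
        self.controller_table[lba] = module
        self.module_tables[module][lba] = offset

    def resolve(self, lba):
        """Perform the two translations in sequence, as the abstract describes."""
        module = self.controller_table[lba]           # controller-level lookup
        offset = self.module_tables[module][lba]      # lookup inside the module
        return module, offset
```

The design point worth noting is that only the small controller table must be global; each module's table covers only the addresses routed to it, which is what makes the scheme scale with the number of modules.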
20160019161 | PROGRAMMABLE ADDRESS MAPPING AND MEMORY ACCESS OPERATIONS - Programmable address mapping and memory access operations are disclosed. An example apparatus includes an address translator to translate a first host physical address to a first intermediate address. The example apparatus also includes a programmable address decoder to decode the first intermediate address to a first hardware memory address of a first addressable memory location in a memory, the programmable address decoder to receive a first command to associate the first host physical address with a second addressable memory location in the memory by changing a mapping between the first intermediate address and a second hardware memory address of the second addressable memory location. | 2016-01-21 |
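The value of the intermediate address in this abstract is that remapping a host physical address to a different memory location touches only the programmable decoder, not the address translator. A small Python illustration of that indirection (method names invented):

```python
class ProgrammableMapper:
    """Sketch of the two-stage mapping: host physical -> intermediate
    (fixed translator) -> hardware address (programmable decoder)."""

    def __init__(self):
        self.host_to_inter = {}   # address translator mappings
        self.inter_to_hw = {}     # programmable decoder mappings

    def map(self, host, inter, hw):
        self.host_to_inter[host] = inter
        self.inter_to_hw[inter] = hw

    def resolve(self, host):
        return self.inter_to_hw[self.host_to_inter[host]]

    def remap(self, host, new_hw):
        """Associate the host address with a different memory location by
        reprogramming only the intermediate -> hardware mapping."""
        self.inter_to_hw[self.host_to_inter[host]] = new_hw
```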
20160019162 | SYNCHRONIZING A TRANSLATION LOOKASIDE BUFFER WITH AN EXTENDED PAGING TABLE - A processor including logic to execute an instruction to synchronize a mapping from a physical address of a guest of a virtualization based system (guest physical address) to a physical address of the host of the virtualization based system (host physical address), and stored in a translation lookaside buffer (TLB), with a corresponding mapping stored in an extended paging table (EPT) of the virtualization based system. | 2016-01-21 |
20160019163 | SYNCHRONIZING A TRANSLATION LOOKASIDE BUFFER WITH AN EXTENDED PAGING TABLE - A processor including logic to execute an instruction to synchronize a mapping from a physical address of a guest of a virtualization based system (guest physical address) to a physical address of the host of the virtualization based system (host physical address), and stored in a translation lookaside buffer (TLB), with a corresponding mapping stored in an extended paging table (EPT) of the virtualization based system. | 2016-01-21 |
20160019164 | SYNCHRONIZING A TRANSLATION LOOKASIDE BUFFER WITH AN EXTENDED PAGING TABLE - A processor including logic to execute an instruction to synchronize a mapping from a physical address of a guest of a virtualization based system (guest physical address) to a physical address of the host of the virtualization based system (host physical address), and stored in a translation lookaside buffer (TLB), with a corresponding mapping stored in an extended paging table (EPT) of the virtualization based system. | 2016-01-21 |
20160019165 | SYNCHRONIZING A TRANSLATION LOOKASIDE BUFFER WITH AN EXTENDED PAGING TABLE - A processor including logic to execute an instruction to synchronize a mapping from a physical address of a guest of a virtualization based system (guest physical address) to a physical address of the host of the virtualization based system (host physical address), and stored in a translation lookaside buffer (TLB), with a corresponding mapping stored in an extended paging table (EPT) of the virtualization based system. | 2016-01-21 |
20160019166 | SYNCHRONIZING A TRANSLATION LOOKASIDE BUFFER WITH AN EXTENDED PAGING TABLE - A processor including logic to execute an instruction to synchronize a mapping from a physical address of a guest of a virtualization based system (guest physical address) to a physical address of the host of the virtualization based system (host physical address), and stored in a translation lookaside buffer (TLB), with a corresponding mapping stored in an extended paging table (EPT) of the virtualization based system. | 2016-01-21 |
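The five entries above share a single abstract: an instruction that brings a stale TLB entry back in sync with the extended page table. A toy Python model of that behavior (invented names; real hardware does this with a dedicated instruction and hardware page walks):

```python
class TlbWithEpt:
    """Toy model: a TLB caching guest-physical -> host-physical mappings
    whose authoritative copies live in an extended page table (EPT)."""

    def __init__(self, ept):
        self.ept = dict(ept)   # guest physical address -> host physical address
        self.tlb = {}          # cached subset of the EPT mappings

    def translate(self, gpa):
        if gpa not in self.tlb:
            self.tlb[gpa] = self.ept[gpa]   # fill the TLB on a miss
        return self.tlb[gpa]

    def update_ept(self, gpa, hpa):
        self.ept[gpa] = hpa    # the TLB copy, if any, is now stale

    def sync_with_ept(self, gpa):
        """Model of the synchronizing instruction: drop the stale TLB entry
        so the next translation refetches the mapping from the EPT."""
        self.tlb.pop(gpa, None)
```

Without the synchronizing step, the TLB keeps serving the old host physical address after the EPT changes, which is exactly the hazard the claimed instruction addresses.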
20160019167 | SOCIAL CACHE - Various embodiments relating to a social cache replacement policy are described. The disclosed techniques utilize social network properties to guide a cache replacement policy executed by a social networking platform system. In one embodiment, a method is provided for determining a queue location at which to cache a data item based on a popularity score computed from social network properties. In another embodiment, a method is provided for computing the popularity score by incorporating a user's social network properties and the social network properties of the user's friends. In embodiments, the popularity score may be computed using a plurality of social network properties, which may include properties associated with (i) the user, (ii) the consumer(s), and/or (iii) the data item(s). In embodiments, a plurality of popularity scores is maintained in a user-score database and periodically updated using historical data. | 2016-01-21 |
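A hedged sketch of such a policy in Python: a popularity score blends the owner's reach, the friends' reach, and the item's own activity, and the cache evicts the least popular entry. The weights, inputs, and scoring formula here are invented for illustration and are not taken from the patent.

```python
def popularity_score(owner_followers, friend_followers, item_views,
                     w_owner=0.5, w_friends=0.3, w_item=0.2):
    """Toy score mixing the owner's reach, the friends' average reach, and
    the item's own view count; the weights are illustrative assumptions."""
    friend_avg = sum(friend_followers) / len(friend_followers) if friend_followers else 0
    return w_owner * owner_followers + w_friends * friend_avg + w_item * item_views


class SocialCache:
    """Cache whose replacement policy evicts the lowest-popularity entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}   # key -> popularity score

    def put(self, key, score):
        if key not in self.items and len(self.items) >= self.capacity:
            victim = min(self.items, key=self.items.get)   # least popular
            del self.items[victim]
        self.items[key] = score
```

A production system would refresh scores periodically from historical data, as the abstract notes; here the score is fixed at insertion time to keep the sketch short.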
20160019168 | On-Demand Shareability Conversion In A Heterogeneous Shared Virtual Memory - The aspects include systems and methods of managing virtual memory page shareability. A processor or memory management unit may set in a page table an indication that a virtual memory page is not shareable with an outer domain processor. The processor or memory management unit may monitor for when the outer domain processor attempts or has attempted to access the virtual memory page. In response to the outer domain processor attempting to access the virtual memory page, the processor may perform a virtual memory page operation on the virtual memory page. | 2016-01-21 |
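The on-demand conversion can be modeled as a per-page shareability flag plus a trap on outer-domain access. An illustrative Python sketch (names and the choice of page operation are assumptions):

```python
class ShareabilityManager:
    """Sketch: pages start non-shareable with an outer-domain processor and
    are converted on demand the first time that processor touches them."""

    def __init__(self):
        self.shareable = {}   # page -> bool (page-table shareability bit)
        self.converted = []   # record of pages that triggered a conversion

    def set_not_shareable(self, page):
        """Mark the page as not shareable with the outer domain."""
        self.shareable[page] = False

    def on_outer_domain_access(self, page):
        """Trap handler: convert the page on first outer-domain access."""
        if not self.shareable.get(page, True):
            self.converted.append(page)   # perform the virtual memory page
            self.shareable[page] = True   # operation, then mark shareable
        return self.shareable[page]
```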