5th week of 2016 patent application highlights, part 48
Patent application number | Title | Published
20160034255Arithmetic Devices, Montgomery Parameter Calculation Method and Modular Multiplication Method Thereof - Disclosed are arithmetic devices, a method of a Montgomery parameter calculation thereof and a Montgomery multiplication method thereof. The method of the Montgomery parameter calculation of the arithmetic devices includes detecting a position of a most significant bit (MSB) of a modulus, calculating an initial value using position information about the detected MSB, and calculating an intermediate value and a Montgomery parameter by repeatedly performing a Montgomery addition or a Montgomery multiplication with respect to the initial value.2016-02-04
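
The flow in the abstract above — detecting the most significant bit of the modulus, deriving an initial value from that position, and then iterating modular additions — is close to the textbook way of computing the Montgomery constant R^2 mod n. The Python sketch below is a minimal software model of that flow, assuming R is the modulus width rounded up to 32-bit words; it is an illustration, not the patented hardware.

```python
def montgomery_r2(n: int, word_bits: int = 32) -> int:
    """Compute the Montgomery parameter R^2 mod n for R = 2^k, where k is the
    modulus width rounded up to a whole number of words.

    Sketch of the flow described in the abstract (details are assumptions):
    1. find the most significant bit (MSB) of the modulus,
    2. derive an initial value 2^(msb+1) mod n from the MSB position, and
    3. reach R^2 mod n by repeated modular doublings (modular additions).
    """
    if n % 2 == 0:
        raise ValueError("Montgomery arithmetic needs an odd modulus")

    msb = n.bit_length() - 1                                   # position of the MSB of n
    k = ((msb + 1 + word_bits - 1) // word_bits) * word_bits   # R = 2^k

    # Initial value from the MSB position: 2^(msb+1) lies in (n, 2n),
    # so a single subtraction reduces it mod n.
    t = (1 << (msb + 1)) - n

    # Doubling t (a modular addition of t with itself) 2*k - (msb+1) times
    # turns 2^(msb+1) mod n into 2^(2k) mod n = R^2 mod n.
    for _ in range(2 * k - (msb + 1)):
        t <<= 1
        if t >= n:
            t -= n
    return t


if __name__ == "__main__":
    n = 0xFFFFFFFFFFFFFFC5                 # an odd 64-bit modulus
    r2 = montgomery_r2(n, word_bits=32)
    assert r2 == (1 << 128) % n            # R = 2^64, so R^2 mod n
    print(hex(r2))
```
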
20160034256FAST INTEGER DIVISION - Embodiments disclosed pertain to apparatuses, systems, and methods for fast integer division. Disclosed embodiments pertain to an integer divide circuit to divide a dividend by a divisor and produce multiple quotient bits per iteration. In some embodiments, the fast integer divider may include a partial remainder register initialized with the dividend. Further, the fast integer divider circuit may include a plurality of adders, where each adder subtracts a multiple of the divisor from the current value in the partial remainder register. A logic block coupled to each of the adders determines multiple quotient bits at each iteration based on the subtraction results.2016-02-04
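
The divider structure described above — a partial-remainder register plus several parallel subtractors that yield more than one quotient bit per iteration — corresponds to classic high-radix restoring division. A radix-4 sketch in Python, producing two quotient bits per step, might look like the following; the radix and all names are illustrative.

```python
def radix4_divide(dividend: int, divisor: int) -> tuple[int, int]:
    """Unsigned radix-4 restoring division producing two quotient bits per
    iteration: the partial remainder starts from the dividend, and per step
    three parallel subtractions (1x, 2x, 3x the divisor) are evaluated; the
    largest non-negative result selects the quotient digit."""
    if divisor == 0:
        raise ZeroDivisionError
    n = max(dividend.bit_length(), 2)
    if n % 2:                     # process the dividend two bits at a time
        n += 1

    remainder = 0
    quotient = 0
    for i in range(n - 2, -1, -2):
        # bring down the next two dividend bits into the partial remainder
        remainder = (remainder << 2) | ((dividend >> i) & 0b11)

        # the three subtractions the abstract's parallel adders would perform
        candidates = [remainder - m * divisor for m in (1, 2, 3)]

        # quotient digit = largest multiple leaving a non-negative remainder
        digit = 0
        for m, c in enumerate(candidates, start=1):
            if c >= 0:
                digit = m
        if digit:
            remainder -= digit * divisor
        quotient = (quotient << 2) | digit
    return quotient, remainder


if __name__ == "__main__":
    for a, b in [(1000, 7), (123456789, 1024), (5, 9)]:
        q, r = radix4_divide(a, b)
        assert (q, r) == divmod(a, b), (a, b, q, r)
    print("ok")
```
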
20160034257GENERATING A HASH USING S-BOX NONLINEARIZING OF A REMAINDER INPUT - A processor includes a hash register and a hash generating circuit. The hash generating circuit includes a novel programmable nonlinearizing function circuit as well as a modulo-2 multiplier, a first modulo-2 summer, a modulo-2 divider, and a second modulo-2 summer. The nonlinearizing function circuit receives a hash value from the hash register and performs a programmable nonlinearizing function, thereby generating a modified version of the hash value. In one example, the nonlinearizing function circuit includes a plurality of separately enableable S-box circuits. The multiplier multiplies the input data by a programmable multiplier value, thereby generating a product value. The first summer sums a first portion of the product value with the modified hash value. The divider divides the resulting sum by a fixed divisor value, thereby generating a remainder value. The second summer sums the remainder value and the second portion of the input data, thereby generating a hash result.2016-02-04
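
The circuit described above is essentially a CRC-style hash with an S-box stage: carry-less ("modulo-2") multiplication, XOR summers, and division by a fixed polynomial. The Python sketch below models that data path in software under several assumptions — a 32-bit hash, a toy 4-bit S-box, the CRC-32 polynomial as the fixed divisor, and a 64-bit input word whose high half plays the role of the "second portion of the input data". It illustrates the structure only, not the patented circuit.

```python
def gf2_mul(a: int, b: int) -> int:
    """Carry-less (modulo-2) multiply: XOR shifted copies of a for each set bit of b."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result


def gf2_mod(value: int, divisor: int) -> int:
    """Remainder of carry-less division by a fixed divisor polynomial."""
    dbits = divisor.bit_length()
    while value.bit_length() >= dbits:
        value ^= divisor << (value.bit_length() - dbits)
    return value


# A toy 4-bit S-box (values are illustrative, not from the application).
SBOX4 = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
         0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]


def nonlinearize(hash_value: int, width: int = 32, enabled: int = 0xFF) -> int:
    """Apply separately enableable 4-bit S-boxes to each nibble of the hash.
    Bit i of `enabled` chooses whether nibble i passes through the S-box."""
    out = 0
    for i in range(width // 4):
        nib = (hash_value >> (4 * i)) & 0xF
        if (enabled >> i) & 1:
            nib = SBOX4[nib]
        out |= nib << (4 * i)
    return out


def hash_step(prev_hash: int, data: int, multiplier: int, divisor: int,
              width: int = 32) -> int:
    """One iteration of the structure the abstract outlines: nonlinearize the
    previous hash, carry-less-multiply the input data by a programmable
    multiplier, XOR the modified hash into the low portion of the product,
    reduce modulo a fixed divisor, then XOR in the high half of the data."""
    modified = nonlinearize(prev_hash, width)
    product = gf2_mul(data, multiplier)       # wide carry-less product
    summed = product ^ modified               # first modulo-2 summer
    remainder = gf2_mod(summed, divisor)      # fixed-divisor modulo-2 division
    return remainder ^ (data >> width)        # second modulo-2 summer


if __name__ == "__main__":
    h = 0
    divisor = 0x104C11DB7                     # CRC-32 polynomial as the fixed divisor
    for word in (0x1122334455667788, 0x99AABBCCDDEEFF00):
        h = hash_step(h, word, multiplier=0x9E3779B1, divisor=divisor)
    print(hex(h))
```
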
20160034258System and Methods of Generating Build Dependencies Radiator - A method of generating an information radiator indicating module dependency relationships includes retrieving a collection of modules forming an application program from an integration server; determining in the collection a plurality of base modules and one or more modules dependent on an output of each base module; and displaying on a web page a dependency layout of the collection of modules, the dependency layout based upon a plurality of dependency relationships between each of the determined plurality of base modules and the one or more modules dependent on the output of each base module.2016-02-04
20160034259SEQUENCE-PROGRAM-COMPONENT CREATION PROGRAM AND SEQUENCE-PROGRAM-COMPONENT CREATION DEVICE - Provided is a sequence-program-component creation program that causes a computer to perform a searching step of searching an overall circuit of a sequence program for a common logic part and extracting a logic pattern that appears in common in a circuit pattern arranged in the common logic part as a common circuit pattern, a component-candidate displaying step of displaying an extracted common circuit pattern as a candidate for a program component, a component registration setting step of registering a common circuit pattern selected by a user from candidates for the program component, and a replacing step of replacing the common logic part of the sequence program with the program component.2016-02-04
20160034260ARTIFACTS FOR COMMUNICATIONS SYSTEMS - An application development platform transmits to a content provider system instructions that provide a user interface for developing an application that specifies a first multi-step communication flow between a communications device and a communications system. The platform receives parameters of a program functionality for inclusion in the application, and selects one or more recommended program modules based on the parameters. The platform transmits instructions that provide a user interface for displaying the one or more recommended program modules. The platform receives data indicating a user selection of a particular program module. In response, the platform transmits instructions that provide a user interface for enabling user configuration of the particular program module. The platform receives modified parameters of the particular program module and determines a second multi-step communication flow between the communications device and the communications system based on the first multi-step communication flow and the modified parameters.2016-02-04
20160034261Augmenting Programming Languages with a Type System - Described is a technology by which metadata augments a programming language such as JavaScript. Software components such as application programming interfaces are associated with metadata. When a software component is selected for use, such as when putting together a computer program in a graphical programming environment, its corresponding metadata is accessed. The metadata may be used to validate the usage of the software component, such as to validate a constraint associated with a value, provide a default value, validate a value's type, and/or determine whether a value is required. Validation may also determine whether data output by one software component is of a type that is appropriate for input by another software component. In addition to validation via type metadata, the metadata may provide descriptive information about the selected software component, such as to assist the programmer and/or provide further information to the programming environment.2016-02-04
20160034262TRANSMISSION POINT PATTERN EXTRACTION FROM EXECUTABLE CODE IN MESSAGE PASSING ENVIRONMENTS - Processes in a message passing system may be launched when messages having data patterns match a function on a receiving process. The function may be identified by an execution pointer within the process. When the match occurs, the process may be added to a runnable queue, and in some embodiments, may be raised to the top of a runnable queue. When a match does not occur, the process may remain in a blocked or non-executing state. In some embodiments, a blocked process may be placed in an idle queue and may not be executed until a process scheduler determines that a message has been received that fulfills a function waiting for input. When the message fulfills the function, the process may be moved to a runnable queue.2016-02-04
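
As a rough model of the scheduling behaviour described above — blocked processes wait in an idle queue until an arriving message matches the pattern their receive function is waiting for, at which point they are moved to (or raised to the top of) the runnable queue — consider the following Python sketch; the class and method names are invented for illustration.

```python
from collections import deque


class Process:
    def __init__(self, name, wanted_pattern):
        self.name = name
        self.wanted_pattern = wanted_pattern   # predicate the receive function waits on


class Scheduler:
    """Minimal sketch of the queueing behaviour in the abstract (assumed
    details): blocked processes sit in an idle queue; when an incoming message
    matches the pattern a process is waiting on, that process is moved to the
    runnable queue, here raised straight to the front."""

    def __init__(self):
        self.runnable = deque()
        self.idle = []

    def block_on_receive(self, process):
        self.idle.append(process)              # stays here until a message matches

    def deliver(self, message):
        for process in list(self.idle):
            if process.wanted_pattern(message):
                self.idle.remove(process)
                self.runnable.appendleft(process)   # raise to top of runnable queue

    def next_to_run(self):
        return self.runnable.popleft() if self.runnable else None


if __name__ == "__main__":
    sched = Scheduler()
    sched.block_on_receive(Process("consumer", lambda m: m.get("type") == "data"))
    sched.deliver({"type": "ping"})
    assert sched.next_to_run() is None          # still blocked: pattern did not match
    sched.deliver({"type": "data", "payload": 42})
    print(sched.next_to_run().name)             # -> consumer
```
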
20160034263INFORMATION PROCESSING APPARATUS, FUNCTION EXTENSION METHOD FOR INFORMATION PROCESSING APPARATUS, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - An information processing apparatus provided with an extension unit for performing control to download and install an extension program for extending functionality, the extension unit comprises: a first control unit that performs control to download from an external server an introduction program that provides information about the extension program which can be downloaded, and installs the introduction program; an obtaining unit that obtains, from the installed introduction program, information about the extension program; a provision unit that provides a screen for displaying the obtained information about the extension program and for receiving an instruction to install the extension program; and a second control unit that, in response to the instruction by a user via the screen, performs control to use key information included in the obtained information to download and install the extension program.2016-02-04
20160034264INFORMATION PROCESSING APPARATUS, PROGRAM MANAGEMENT METHOD FOR INFORMATION PROCESSING APPARATUS, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - An information processing apparatus provided with an extension unit, the extension unit comprises a unit that performs control to download an introduction program that provides information about an extension program for extending functionality which can be downloaded and to install the introduction program; a unit that obtains, from the installed introduction program, information about an extension program which can be downloaded; a unit that performs control to download the extension program that can be downloaded and install the extension program that can be downloaded, based on the information obtained from the introduction program, in response to receiving an instruction to install the extension program which can be downloaded; and a unit that performs control to uninstall the installed extension program, based on limitation information for the extension program, which is included in the information obtained from the introduction program.2016-02-04
20160034265Method for Bulk App Loading on Mobile Devices - A method of quickly installing a set of apps on a used smartphone, activating them by a process that does not require the user to set up multiple accounts, and a method of financing the process.2016-02-04
20160034266DYNAMIC PLUGIN(S) FOR CLOUD APPLICATION(S) - Techniques are described herein that are capable of dynamically installing plugin(s) for application(s). An agent plugin is caused to run in a deployment of a specified application (e.g., across multiple machines in a cloud environment or “on premises”). The specified application is packaged to include the agent plugin. The agent plugin is used to install designated plugin(s) dynamically based on configuration information regarding the specified application. The configuration information indicates that the designated plugin(s) are to be installed in response to the specified application being deployed.2016-02-04
20160034267LIGHTWEIGHT APPLICATION DEPLOYMENT - The present disclosure describes methods, systems, and computer program products for providing a lightweight deployment of mobile cloud applications. A computer-implemented method comprises: receiving, at a server and from a remote client device, a first request to create a frame for the application; storing, by the server, the frame of the application in a repository; generating, by the server, an identifier associated with the frame and the repository; initiating, by the server, a copying of the repository to a workspace; and receiving, by the repository or the workspace and from the remote client, a pushing command including the identifier to update the frame stored in the repository or the workspace with application data associated with a created, modified or deleted version of the application.2016-02-04
20160034268Managing Firmware Updates - A system and method are disclosed herein. The computing system includes a blade enclosure to receive a plurality of cartridges. The computing system also includes an enclosure manager in the blade enclosure to manage the plurality of cartridges. The enclosure manager determines that a cartridge includes updated firmware, and propagates the updated firmware to the plurality of cartridges.2016-02-04
20160034269Updating Software based on Similarities between Endpoints - An apparatus for updating software or changing configuration of software installed in a plurality of terminals, including: a recognition unit for recognizing that the software installed in a first terminal has been successfully updated or the configuration of the software installed in the first terminal has been successfully changed; a selection unit for selecting, in response to the recognition that the software installed in the first terminal has been successfully updated or the configuration of the software installed in the first terminal has been successfully changed, a second terminal in a case where a degree of similarity between a configuration of the first terminal and a configuration of the second terminal is equal to or higher than a predetermined reference value; and an instruction unit for giving an instruction to update the software or to change the configuration of the software installed in the second terminal.2016-02-04
20160034270ESTIMATING LIKELIHOOD OF CODE CHANGES INTRODUCING DEFECTS - Information about a failed build of a computer software project under development can be accessed, where the information describes symptoms of the failed build. Committed change collections can be identified as collections that were committed since a previous successful build of the computer software project. Also, respective scores for the committed change collections can be produced. Each score can represent an estimate of a likelihood that an associated one of the committed change collections is at least a partial cause of the build failure.2016-02-04
20160034271APPARATUS AND METHOD FOR SUPPORTING SHARING OF SOURCE CODE - In a shared change set server, a receiving section receives information on an undetermined change set and information on users sharing the undetermined change set from a terminal device used by a developer who has developed the change set. Subsequently, a shared change set management section prepares a shared change set containing the undetermined change set and information on users sharing the undetermined change set, and stores the shared change set in a shared change set storage section. A transmitting section thereafter transmits information on the shared change set to a terminal device used by a developer sharing the shared change set.2016-02-04
20160034272MANAGING A CATALOG OF SCRIPTS - According to an example, a catalog of scripts may be managed. Management of the catalog of scripts may include the addition of a script description into the catalog of scripts. In one example, the script description may be directly added to the catalog of scripts. In another example, the script description may be added through generation of a merged query of scripts.2016-02-04
20160034273Attributing Authorship to Segments of Source Code - An electronic device accesses a comparison of at least a portion of a second version of a software program to a corresponding portion of a first version of the software program. The device determines an attribution value for a first author based in part on one or more differences between a respective segment of source code in the second version of the software and a corresponding segment of source code in the first version of the software, and determines an attribution value for a second author based in part on one or more differences between the respective segment of source code in the second version of the software and the corresponding segment of source code in the first version of the software. The device displays or sends instructions for displaying indicia of at least one attribution value with the respective segment of source code in the second version.2016-02-04
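
A simple way to realise the attribution described above is to diff the two versions of a segment, keep the previous author for unchanged lines, and attribute inserted or replaced lines to the author of the new version. The sketch below does that with Python's difflib; this per-line attribution policy is an assumption, not necessarily the patented one.

```python
import difflib


def attribute_lines(old_lines, new_lines, old_authors, editing_author):
    """Per-line authorship for the new version of a source segment.
    `old_authors[i]` names the author of `old_lines[i]`; unchanged lines keep
    their previous author, inserted/replaced lines go to the editing author."""
    attribution = []
    matcher = difflib.SequenceMatcher(a=old_lines, b=new_lines)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            attribution.extend(old_authors[i1:i2])
        elif tag in ("replace", "insert"):
            attribution.extend([editing_author] * (j2 - j1))
        # "delete": removed lines contribute nothing to the new version
    return attribution


def attribution_values(attribution):
    """Fraction of the segment's lines attributed to each author."""
    total = len(attribution) or 1
    return {a: attribution.count(a) / total for a in set(attribution)}


if __name__ == "__main__":
    v1 = ["def area(r):", "    return 3.14 * r * r", ""]
    v2 = ["import math", "def area(r):", "    return math.pi * r ** 2", ""]
    authors_v1 = ["alice", "alice", "alice"]
    lines = attribute_lines(v1, v2, authors_v1, "bob")
    print(attribution_values(lines))   # e.g. {'alice': 0.5, 'bob': 0.5}
```
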
20160034274COMPLEXITY REDUCTION OF USER TASKS - An exemplary method for reducing complexity of at least one user task includes steps of calculating a complexity metric for the at least one user task; identifying one or more usability issues having a measurable impact on the complexity metric for the at least one user task; determining one or more recommendations for addressing at least one of the one or more usability issues; and displaying a representation of at least one of the one or more usability issues and of at least one of the one or more recommendations. In an illustrative embodiment, implementing any one of the one or more recommendations reduces the impact of the usability issue on the complexity metric of the at least one user task and thereby reduces a complexity of the at least one user task.2016-02-04
20160034275CODING CONVENTION DISCOVERY AND ENFORCEMENT - In general, embodiments of the invention provide an approach to discover and enforce coding conventions among a group of developers. Specifically, source code files for a group of developers are imported from a code repository. The source code files are analyzed to discover the commonly used coding conventions of the group. Convention templates are generated based on these coding conventions. Each convention template is assigned a weighted value, and the convention templates are reviewed and approved based on the weighted value.2016-02-04
20160034276ADAPTIVE INTERFACE FOR CROSS-PLATFORM COMPONENT GENERATION - Exemplary embodiments provide adapted components that may be used by a computer program under different execution contexts. The adapted components may include platform independent source code which may be executed regardless of the execution context in which the component is deployed. Adaptation logic may wrap the execution context independent component in a wrapper. The wrapper may perform data marshaling between the execution context independent component and a computer program invoking the execution context independent component, or the host system on which the computer program is deployed. The execution context independent component may be adapted to a new execution context dynamically the first time that the execution context independent component is invoked in the execution context. Thereafter, the execution context independent component may be invoked statically without the need to re-adapt the component.2016-02-04
20160034277Software Defined SaaS Platform - A system that transforms non-SaaS applications into tenant-aware SaaS applications is disclosed, which analyzes the non-SaaS applications to determine which intercepts to external libraries need to be translated into SaaS intercepts that utilize SaaS tenancy services, SaaS operations services, and/or SaaS business services. The system transforms the non-SaaS applications into SaaS applications by providing intercept handlers that call SaaS services on demand when the transformed SaaS application throws a transformed SaaS interrupt.2016-02-04
20160034278PICOENGINE HAVING A HASH GENERATOR WITH REMAINDER INPUT S-BOX NONLINEARIZING - A processor includes a hash register and a hash generating circuit. The hash generating circuit includes a novel programmable nonlinearizing function circuit as well as a modulo-2 multiplier, a first modulo-2 summer, a modulo-2 divider, and a second modulo-2 summer. The nonlinearizing function circuit receives a hash value from the hash register and performs a programmable nonlinearizing function, thereby generating a modified version of the hash value. In one example, the nonlinearizing function circuit includes a plurality of separately enableable S-box circuits. The multiplier multiplies the input data by a programmable multiplier value, thereby generating a product value. The first summer sums a first portion of the product value with the modified hash value. The divider divides the resulting sum by a fixed divisor value, thereby generating a remainder value. The second summer sums the remainder value and the second portion of the input data, thereby generating a hash result.2016-02-04
20160034279BRANCH PREDICTION USING MULTI-WAY PATTERN HISTORY TABLE (PHT) AND GLOBAL PATH VECTOR (GPV) - Embodiments relate to branch prediction using a pattern history table (PHT) that is indexed using a global path vector (GPV). An aspect includes receiving a search address by a branch prediction logic that is in communication with the PHT and the GPV. Another aspect includes starting with the search address, simultaneously determining a plurality of branch predictions by the branch prediction logic based on the PHT, wherein the plurality of branch predictions comprises one of: (i) at least one not taken prediction and a single taken prediction, and (ii) a plurality of not taken predictions. Another aspect includes updating the GPV by shifting an instruction identifier of a branch instruction associated with a taken prediction into the GPV, wherein the GPV is not updated based on any not taken prediction.2016-02-04
20160034280BRANCH PREDICTION USING MULTI-WAY PATTERN HISTORY TABLE (PHT) AND GLOBAL PATH VECTOR (GPV) - Embodiments relate to branch prediction using a pattern history table (PHT) that is indexed using a global path vector (GPV). An aspect includes receiving a search address by a branch prediction logic that is in communication with the PHT and the GPV. Another aspect includes starting with the search address, simultaneously determining a plurality of branch predictions by the branch prediction logic based on the PHT, wherein the plurality of branch predictions comprises one of: (i) at least one not taken prediction and a single taken prediction, and (ii) a plurality of not taken predictions. Another aspect includes updating the GPV by shifting an instruction identifier of a branch instruction associated with a taken prediction into the GPV, wherein the GPV is not updated based on any not taken prediction.2016-02-04
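
Both entries above describe a pattern history table (PHT) indexed with a global path vector (GPV) that is updated only when a branch is taken. The toy Python model below captures those two behaviours; the table size, the XOR indexing, and the 2-bit saturating counters are conventional branch-predictor choices rather than details from the applications.

```python
class GPVBranchPredictor:
    """Toy model of a PHT indexed by a global path vector. The key points kept
    from the abstracts: predictions are looked up using the current GPV, and
    the GPV changes only by shifting in the identifier of a taken branch."""

    def __init__(self, pht_bits: int = 10):
        self.pht_bits = pht_bits
        self.pht = [1] * (1 << pht_bits)     # 2-bit saturating counters, weakly not-taken
        self.gpv = 0

    def _index(self, address: int) -> int:
        mask = (1 << self.pht_bits) - 1
        return (address ^ self.gpv) & mask   # combine branch address with path history

    def predict(self, address: int) -> bool:
        return self.pht[self._index(address)] >= 2   # taken if counter is 2 or 3

    def update(self, address: int, taken: bool) -> None:
        i = self._index(address)
        self.pht[i] = min(3, self.pht[i] + 1) if taken else max(0, self.pht[i] - 1)
        if taken:
            # GPV is updated only on taken branches: shift in an instruction id
            mask = (1 << self.pht_bits) - 1
            self.gpv = ((self.gpv << 1) ^ (address & mask)) & mask


if __name__ == "__main__":
    bp = GPVBranchPredictor()
    # a loop branch at address 0x400, taken nine times and then falling through
    for i in range(10):
        bp.predict(0x400)
        bp.update(0x400, taken=(i < 9))
    print(bp.predict(0x400))
```
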
20160034281INSTRUCTION PROCESSING SYSTEM AND METHOD - An instruction processing system is provided. The system includes a central processing unit (CPU), a memory system and an instruction control unit. The CPU is configured to execute one or more instructions of the executable instructions. The memory system is configured to store the instructions. The instruction control unit is configured to, based on location of a branch instruction stored in a track table, control the memory system to provide the instructions to be executed for the CPU. Further, the instruction control unit is also configured to, based on branch prediction of the branch instruction stored in the track table, control the memory system to output one of a fall-through instruction and a target instruction of the branch instruction.2016-02-04
20160034282INSTRUCTION AND LOGIC TO PROVIDE SIMD SECURE HASHING ROUND SLICE FUNCTIONALITY - Instructions and logic provide SIMD secure hashing round slice functionality. Some embodiments include a processor comprising: a decode stage to decode an instruction for a SIMD secure hashing algorithm round slice, the instruction specifying a source data operand set, a message-plus-constant operand set, a round-slice portion of the secure hashing algorithm round, and a rotator set portion of rotate settings. Processor execution units, responsive to the decoded instruction, perform a secure hashing round-slice set of round iterations upon the source data operand set, applying the message-plus-constant operand set and the rotator set, and store a result of the instruction in a SIMD destination register. One embodiment of the instruction specifies a hash round type as one of four MD5 round types. Other embodiments may specify a hash round type by an immediate operand as one of three SHA-1 round types or as a SHA-2 round type.2016-02-04
20160034283GLOBAL DATA ESTABLISHMENT FOR STORAGE ARRAYS CONTROLLED BY A PLURALITY OF NODES - A plurality of data arrays are coupled to a plurality of nodes via a plurality of adapters. The plurality of adapters discover the plurality of data arrays during startup, and information about the plurality of data arrays is communicated to corresponding local nodes of the plurality of nodes, wherein the local nodes broadcast the information to other nodes of the plurality of nodes. A director node of the plurality of nodes determines which data arrays of the plurality of data arrays are a current set of global metadata arrays, based on the broadcasted information.2016-02-04
20160034284Shutdown Notifications - Shutdown notification techniques are described in which notifications associated with various applications and functionality of a device are presented in conjunction with a shutdown sequence. In one or more implementations, a shutdown of the device may be initiated automatically in response to low power conditions, device/application errors, restarts, or explicitly by a user. A notification system of a device may be configured to enable designation of particular notifications to show upon shutdown. Notifications to output at shutdown may be selected based upon various criteria including but not limited to selection based on a perceived importance, notification type, particular application(s), and/or particular user contacts. When a shutdown is initiated, a check is performed to determine whether any designated notifications are available. Then, available notifications may be exposed in various ways prior to complete shutdown, such as by showing the notifications as part of a user interface(s) for the shutdown sequence.2016-02-04
20160034285Extending JAVA Application Functionality - Methods and systems for extending functions of a JAVA application. The JAVA application may call a browser to obtain the global configuration file that is indicated by a URL and load configuration information of extensions of the JAVA application based on the global configuration file. In response to a user request received by the JAVA application, the JAVA application calls a browser and processes the user request based on the loaded configuration information of extensions of the JAVA application. In implementations, the JAVA application may transmit the user request to a server. After receiving a response to the user request from the server, the JAVA application may process the response to the user request based on the loaded configuration information of extensions of the JAVA application. The implementations may respond to the user request that is beyond the preset functions of the JAVA application.2016-02-04
20160034286VEHICLE SUSPENSION AUGMENTATION DEVICES, SYSTEMS AND METHODS - Devices, systems and methods for replacing a factory installed or similar air suspension controller in a vehicle with an augmentor, which sends correct status messages to the vehicle main computer when the air suspension is replaced with coil springs or shocks. The augmentor can include a voltage regulator, an indicator, and a bus interface. At power on, the program initializes the microcontroller registers, timer registers, and control registers, then loops until an inquiry or command is received, then responds with status messages that are the same as the status messages sent by the original factory installed air suspension controller until power is removed.2016-02-04
20160034287PLANNED VIRTUAL MACHINES - A planned virtual machine, for use in staging the construction of a virtual machine. Such a planned virtual machine may be used as part of a method for migrating virtual machines. The method may include creating a planned virtual machine based on a first realized virtual machine or a template, performing a configuration operation on the planned virtual machine, and converting the planned virtual machine to a second realized virtual machine. The configuration operation may comprise interaction with a virtualization platform managing the planned virtual machine and may be based on input provided by a user.2016-02-04
20160034288METHOD AND AN APPARATUS FOR CO-PROCESSOR DATA PLANE VIRTUALIZATION - A method and a system embodying the method for a data plane virtualization, comprising assigning each of at least one data plane a unique identifier; providing a request comprising an identifier of one of the at least one data plane together with an identifier of a virtual resource assigned to a guest; determining validity of the provided request in accordance with the identifier of the one of the at least one data plane and the identifier of the virtual resource assigned to the guest; and processing the request based on the determined validity of the request are disclosed.2016-02-04
20160034289COMPUTER SYSTEM AND PROCESSING METHOD OF THE SAME - In a computer system including peripheral equipment and a blade server provided with a plurality of blades, which are physical machines, and a plurality of virtual machines available on the blades, the same OS identifier is allocated, before and after the migration, to an OS that migrates along with migration of the virtual machine, migrates among the plurality of virtual machines, or migrates between the virtual machine and the blade, and logs of the blades and/or the virtual machines and logs of the peripheral equipment are stored in association with the OS identifier.2016-02-04
20160034290DYNAMICALLY DEPLOYED VIRTUAL MACHINE - A virtual machine data handling system includes a data handling system, a hypervisor, and a dynamically deployed virtual machine. The data handling system includes a plurality of physical computing resources (e.g., a processor and a memory). The hypervisor is implemented by the processor and the memory and deploys virtual machines from a master image. The dynamically deployed virtual machine is initially deployed by the hypervisor as a Linked Clone of the master image. The dynamically deployed virtual machine is subsequently dynamically deployed by the hypervisor copying a plurality of virtual memory segments from the master image until the dynamically deployed virtual machine is an independent Full Clone of the master image. The hypervisor may copy the plurality of virtual memory segments from the master image if at least one of the physical resources is operating below a utilization threshold.2016-02-04
20160034291SYSTEM ON A CHIP AND METHOD FOR A CONTROLLER SUPPORTED VIRTUAL MACHINE MONITOR - A system on a chip comprising: a first communication controller; at least one second communication controller operably coupled to the first communication controller; at least one processing core operably coupled to the first communication controller and arranged to support software running on a first partition and a second partition; and a virtual machine monitor located between the first and second partitions, and the at least one processing core and arranged to support communications there between. The first communication controller is arranged to: generate or receive at least one data frame; and communicate the at least one data frame to the at least one second communication controller; such that the at least one second communication controller is capable of routing the at least one data frame to the second partition bypassing the virtual machine monitor.2016-02-04
20160034292MONITORING AND DYNAMICALLY RECONFIGURING VIRTUAL MACHINE PATTERNS - A cloud manager monitors running VM patterns, determines potential VM patterns that have a different configuration than the running VM patterns, and performs estimates of a plurality of metrics for the potential VM patterns. When the estimates for the potential VM patterns exceed the monitored VM patterns currently running by some threshold amount, the potential VM patterns may be automatically deployed to one or more clouds. The result is a cloud-based system that is automatically and dynamically tuned to changing conditions.2016-02-04
20160034293MONITORING AND DYNAMICALLY RECONFIGURING VIRTUAL MACHINE PATTERNS - A cloud manager monitors running VM patterns, determines potential VM patterns that have a different configuration than the running VM patterns, and performs estimates of a plurality of metrics for the potential VM patterns. When the estimates for the potential VM patterns exceed the monitored VM patterns currently running by some threshold amount, the potential VM patterns may be automatically deployed to one or more clouds. The result is a cloud-based system that is automatically and dynamically tuned to changing conditions.2016-02-04
20160034294DYNAMICALLY DEPLOYED VIRTUAL MACHINE - A virtual machine data handling system includes a data handling system, a hypervisor, and a dynamically deployed virtual machine. The data handling system includes a plurality of physical computing resources (e.g., a processor and a memory). The hypervisor is implemented by the processor and the memory and deploys virtual machines from a master image. The dynamically deployed virtual machine is initially deployed by the hypervisor as a Linked Clone of the master image. The dynamically deployed virtual machine is subsequently dynamically deployed by the hypervisor copying a plurality of virtual memory segments from the master image until the dynamically deployed virtual machine is an independent Full Clone of the master image. The hypervisor may copy the plurality of virtual memory segments from the master image if at least one of the physical resources is operating below a utilization threshold.2016-02-04
20160034295HYPERVISOR-HOSTED VIRTUAL MACHINE FORENSICS - A computer system acquires forensics data from running virtual machines in a hypervisor-hosted virtualization environment. The computer system provides a forensics partition as an additional root virtual machine partition or child virtual machine partition. The forensics partition includes a forensics service application programming interface configured to target one or more virtual machines and acquire forensics data from a targeted virtual machine running in a particular child virtual machine partition. The forensics service application programming interface is configured to communicate via one or more inter-partition communication mechanisms such as an inter-partition communication bus, a hypercall interface, or a forensics switch implemented by the hypervisor-hosted virtualization environment. The forensics service application programming interface can be exposed to a forensics tool as part of a cloud-based forensics service.2016-02-04
20160034296METHODS AND APPARATUS FOR PROVIDING HYPERVISOR LEVEL DATA SERVICES FOR SERVER VIRTUALIZATION - A system for cloud-based data services for multiple enterprises, including a plurality of cloud hypervisors that cooperatively provide cloud-based services to multiple enterprises, each hypervisor including a plurality of cloud virtual servers, each cloud virtual server being associated with an enterprise, at least one cloud virtual disk that is read from and written to by the at least one virtual server, each cloud virtual disk being associated with an enterprise, and a virtual data services appliance, which provides cloud-based data services, and multiple data services managers, one data services manager per respective enterprise, each of which coordinates the respective virtual data services appliances for those cloud hypervisors that service its corresponding enterprise.2016-02-04
20160034297SYSTEMS AND METHODS FOR MODIFYING AN OPERATING SYSTEM FOR A VIRTUAL MACHINE - Systems, methods, and software are described herein for operating a data management system, including executing an attached application and application data on a first virtual machine running a first operating system, separating the attached application and application data from the first virtual machine, and dynamically attaching the application and application data to a second virtual machine running an updated version of the first operating system.2016-02-04
20160034298AUTHENTICATION OF VIRTUAL MACHINE IMAGES USING DIGITAL CERTIFICATES - A vendor of virtual machine images accesses a virtual computer system service to upload a digitally signed virtual machine image to a data store usable by customers of the virtual computer system service to select an image for creating a virtual machine instance. If a digital certificate is uploaded along with the virtual machine image, the virtual computer system service may determine whether the digital certificate has been trusted for use. If the digital certificate has been trusted for use, the virtual computer system service may use a public cryptographic key to decrypt a hash signature included with the image to obtain a first hash value. The service may additionally apply a hash function to the image itself to obtain a second hash value. If the two hash values match, then the virtual machine image may be deemed to be authentic.2016-02-04
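
The verification flow in the abstract reduces to: reject if the certificate is not trusted, recover a first hash from the signature using the public key, compute a second hash over the image itself, and compare the two. The sketch below models that flow; SHA-256 and the decrypt_with_public_key callable are stand-ins for whatever hash and signature scheme an actual service would use.

```python
import hashlib
import hmac


def image_is_authentic(image_bytes: bytes, hash_signature: bytes,
                       decrypt_with_public_key, certificate_trusted: bool) -> bool:
    """Sketch of the verification flow in the abstract. The caller supplies
    `decrypt_with_public_key`, a callable standing in for the public-key
    operation that recovers the signed hash, and a flag saying whether the
    uploaded certificate has already been marked as trusted."""
    if not certificate_trusted:
        return False                                          # untrusted certificate: reject
    first_hash = decrypt_with_public_key(hash_signature)      # hash recovered from the signature
    second_hash = hashlib.sha256(image_bytes).digest()        # hash computed over the image
    return hmac.compare_digest(first_hash, second_hash)       # constant-time comparison


if __name__ == "__main__":
    image = b"\x7fELF...virtual machine image bytes..."
    expected = hashlib.sha256(image).digest()
    # stand-in for the public-key step: here the "signature" simply is the hash
    fake_decrypt = lambda sig: sig
    print(image_is_authentic(image, expected, fake_decrypt, certificate_trusted=True))    # True
    print(image_is_authentic(image, b"\x00" * 32, fake_decrypt, certificate_trusted=True))  # False
```
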
20160034299MAINTAINING HARDWARE RESOURCE BANDWIDTH QUALITY-OF-SERVICE VIA HARDWARE COUNTER - Each time a currently scheduled virtual machine (VM) accesses a hardware resource over a bus for the hardware resource via the currently scheduled VM running on a processor, a hardware component adjusts a bandwidth counter associated with usage of the bus for the hardware resource, without involvement of the currently scheduled VM or a hypervisor managing the currently scheduled VM. Responsive to the bandwidth counter reaching a threshold value, the hardware component issues an interrupt for handling by the hypervisor to maintain bandwidth quality-of-service (QoS) of bus bandwidth related to the hardware resource. Upon expiration of a regular time interval prior to the bandwidth counter reaching the threshold value, the hardware component resets the bandwidth counter to a predetermined value associated with the currently scheduled VM, without involvement of the currently scheduled VM or the hypervisor; the hardware component does not issue an interrupt. The hardware resource can be memory.2016-02-04
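
A software model of the counter behaviour described above might look like the following: every bus access decrements a per-VM budget without involving the VM or hypervisor, hitting the threshold latches an interrupt for the hypervisor, and a regular timer refills the budget if the threshold was not reached. Allotment sizes and the decrement-per-access policy are placeholders.

```python
class BandwidthCounter:
    """Toy model of the per-VM hardware counter in the abstract."""

    def __init__(self, allotment: int, threshold: int = 0):
        self.allotment = allotment          # per-interval budget for the scheduled VM
        self.threshold = threshold
        self.count = allotment
        self.interrupt_pending = False

    def on_bus_access(self, cost: int = 1) -> None:
        """Called by the hardware for every access; no VM/hypervisor involvement."""
        if self.interrupt_pending:
            return
        self.count -= cost
        if self.count <= self.threshold:
            self.interrupt_pending = True   # hypervisor will throttle or reschedule

    def on_interval_tick(self) -> None:
        """Regular timer expiry before the threshold was hit: just refill."""
        if not self.interrupt_pending:
            self.count = self.allotment


if __name__ == "__main__":
    ctr = BandwidthCounter(allotment=5)
    for _ in range(4):
        ctr.on_bus_access()
    ctr.on_interval_tick()                  # refilled, no interrupt
    for _ in range(6):
        ctr.on_bus_access()
    print(ctr.interrupt_pending)            # True: QoS budget exceeded within the interval
```
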
20160034300INFORMATION PROCESSING DEVICE AND METHOD - An information processing device includes a processor that executes a process. The process includes: identifying a cause of a shift from non-privileged mode to privileged mode that has occurred in processing by a guest program in an upper level virtual machine in a nested virtualization environment in which a first level virtual machine monitor operates in privileged mode, and an upper level virtual machine monitor and the guest program operate in non-privileged mode; and when the identified cause is setting or updating a virtual translation table employed in a virtual translation mechanism provided to the guest program by virtualizing an address translation mechanism for hardware that uses a set translation table to translate addresses of DMA by an input/output device assigned to the upper level virtual machine, setting the translation table employed by the translation mechanism based on a correspondence relationship between guest memory space and host memory space.2016-02-04
20160034301Identifying Performance Bottleneck of Transaction in Transaction Processing System - A mechanism is provided for identifying a performance bottleneck of a transaction in a transaction processing system. At a predefined time point, status information of an interaction between the transaction and a processing component among one or more processing components in the transaction processing system is collected. A duration of the interaction on the basis of the status information is determined. In response to the duration exceeding a predefined threshold, the interaction is identified as the performance bottleneck of the transaction in order to make changes to the transaction processing system thereby improving performance.2016-02-04
20160034302VIRTUAL MACHINE MIGRATION TOOL - Tools and techniques for migrating applications to compute clouds are described herein. A tool may be used to migrate any arbitrary application to a specific implementation of a compute cloud. The tool may use a library of migration rules, apply the rules to a selected application, and in the process generate migration output. The migration output may be advisory information, revised code, patches, or the like. There may be different sets of rules for different cloud compute platforms, allowing the application to be migrated to different clouds. The rules may describe a wide range of application features and corresponding corrective actions for migrating the application. Rules may specify semantic behavior of the application, code or calls, storage, database instances, interactions with databases, operating systems hosting the application, and others.2016-02-04
20160034303CACHE MOBILITY - A method and system of selecting and migrating relevant data from among data associated with a workload of a virtual machine and stored in source storage cache memory in a dynamic computing environment is described. The method includes selecting one or more policies, the one or more policies including a size policy defining a default maximum size for the relevant data. The method also includes selecting the relevant data from among the data based on the one or more policies in a default mode, and migrating the relevant data from the source storage cache memory to target storage cache memory.2016-02-04
20160034304DEPENDENCE TRACKING BY SKIPPING IN USER MODE QUEUES - A system and methods embodying some aspects of the present embodiments for maintaining compact in-order queues are provided. The queue management method includes requesting a work pointer from a primary queue, wherein the work pointer points to a work assignment comprising an indirect queue and a dependency list; responsive to the dependency list not being cleared, invalidating the work pointer in the primary queue and adding a new pointer to the end of the primary queue, the new pointer configured to point to the work assignment; and responsive to the dependency list being clear, removing the work pointer from the primary queue and performing work in the indirect queue.2016-02-04
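
The queue-skipping scheme above is straightforward to model: pop the head work pointer, re-append it to the tail if its dependency list is not yet clear, and otherwise drain its indirect queue. The following Python sketch does exactly that; the names and the `completed` set used to clear dependencies are illustrative.

```python
from collections import deque


class WorkAssignment:
    def __init__(self, name, dependencies, indirect_queue):
        self.name = name
        self.dependencies = set(dependencies)   # dependency list; empty means cleared
        self.indirect_queue = deque(indirect_queue)


def drain(primary_queue: deque, completed: set) -> list:
    """Sketch of the skipping scheme: take the head work pointer; if its
    dependency list is not yet cleared, skip it by re-adding a pointer at the
    tail of the primary queue; otherwise remove it and run everything in its
    indirect queue. `completed` records finished assignments so that later
    entries' dependencies clear."""
    executed = []
    while primary_queue:
        work = primary_queue.popleft()              # head work pointer
        if work.dependencies - completed:           # dependency list not cleared
            primary_queue.append(work)              # invalidate head slot, re-add at tail
            continue
        while work.indirect_queue:                  # dependencies clear: do the work
            executed.append(work.indirect_queue.popleft())
        completed.add(work.name)
    return executed


if __name__ == "__main__":
    q = deque([
        WorkAssignment("B", {"A"}, ["b1", "b2"]),   # depends on A, so it is skipped once
        WorkAssignment("A", set(), ["a1"]),
    ])
    print(drain(q, completed=set()))                # ['a1', 'b1', 'b2']
```
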
20160034305METHODS AND SYSTEMS FOR PURPOSEFUL COMPUTING - A system, method, and computer-readable storage medium configured to facilitate user purpose in a computing architecture.2016-02-04
20160034306METHOD AND SYSTEM FOR A GRAPH BASED VIDEO STREAMING PLATFORM - A method implemented in an electronic device serving as an orchestrator managing video and audio stream processing of a streaming platform system is disclosed. The method includes the electronic device receiving a request to process a video source and creating a task graph based on the request, where the task graph is a directed acyclic graph of tasks for processing the video source, where each node of the task graph represents a processing task, and where each edge of the task graph represents a data flow across two processing tasks and corresponding input and output of each processing task. The method also includes the electronic device estimating resource requirements of each processing task, and splitting the task graph into a plurality of subsets, wherein each subset corresponds to a task group to be executed by one or more workers of a plurality of processing units of the streaming platform system.2016-02-04
20160034307MODIFYING A FLOW OF OPERATIONS TO BE EXECUTED IN A PLURALITY OF EXECUTION ENVIRONMENTS - A flow of operations is to be executed in a plurality of execution environments according to a distribution. In response to determining that the distribution is unable to achieve at least one criterion, the distribution is modified according to at least one policy that specifies at least one action to apply to the flow of operations in response to a corresponding at least one condition relating to a characteristic of the flow of operations.2016-02-04
20160034308BACKGROUND TASK RESOURCE CONTROL - Among other things, one or more techniques and/or systems are provided for controlling resource access for background tasks. For example, a background task created by an application may utilize a resource (e.g., CPU cycles, bandwidth usage, etc.) by consuming resource allotment units from an application resource pool. Once the application resource pool is exhausted, the background task is generally restricted from utilizing the resource. However, the background task may also utilize global resource allotment units from a global resource pool shared by a plurality of applications to access the resource. Once the global resource pool is exhausted, unless the background task is a guaranteed background task which can consume resources regardless of resource allotment states of resource pools, the background task may be restricted from utilizing the resource until global resource allotment units within the global resource pool and/or resource allotment units within the application resource pool are replenished.2016-02-04
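
The two-level allotment scheme described above — a per-application pool, a shared global pool, and guaranteed tasks that bypass both — can be sketched as a small broker object, as below. The unit counts and the replenishment policy are placeholders, not values from the application.

```python
class ResourceBroker:
    """Sketch of the background-task resource control in the abstract: a task
    first draws from its own application's pool, then from a global pool
    shared by all applications; guaranteed tasks always proceed."""

    def __init__(self, app_pools: dict, global_pool: int):
        self.app_pools = dict(app_pools)     # app name -> remaining allotment units
        self.global_pool = global_pool       # shared allotment units

    def request(self, app: str, units: int, guaranteed: bool = False) -> bool:
        if guaranteed:
            return True                               # guaranteed tasks bypass the pools
        if self.app_pools.get(app, 0) >= units:       # prefer the application's own pool
            self.app_pools[app] -= units
            return True
        if self.global_pool >= units:                 # fall back to the shared global pool
            self.global_pool -= units
            return True
        return False                                  # both pools exhausted: restrict the task

    def replenish(self, app_refill: int, global_refill: int) -> None:
        for app in self.app_pools:
            self.app_pools[app] = app_refill
        self.global_pool = global_refill


if __name__ == "__main__":
    broker = ResourceBroker({"mail": 2, "news": 1}, global_pool=2)
    print([broker.request("mail", 1) for _ in range(5)])   # [True, True, True, True, False]
    print(broker.request("mail", 1, guaranteed=True))      # True even when both pools are empty
```
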
20160034309SYSTEM AND METHOD FOR CONTEXT-AWARE ADAPTIVE COMPUTING - The present disclosure relates to systems and methods for context-aware adaptive computing. In one embodiment, the present disclosure includes a method comprising receiving a request at a first information handling system (IHS) to perform an application computation. The method also includes determining a user's context, the user operating the first IHS, and ascertaining a battery state of the first IHS. The method further includes allocating the application computation between the first IHS and a second IHS based at least on the user's context and the battery state of the first IHS. The present disclosure also includes associated systems and apparatuses.2016-02-04
20160034310JOB ASSIGNMENT IN A MULTI-CORE PROCESSOR - Technologies are generally described for methods and systems effective to assign a job to be executed in a multi-core processor that includes a first set of cores with a first size and a second set of cores with a second size different from the first size. The multi-core processor may receive the job at an arrival time and may determine a job arrival rate based on the arrival time. The job arrival rate may indicate a frequency that the multi-core processor receives a plurality of jobs. The multi-core processor may select the first set of cores and may select a degree of parallelism based on the job arrival rate and based on a performance metric relating to execution of the job on the first set of cores. In response to the selection, the multi-core processor may assign the job to be executed on the first set of cores.2016-02-04
20160034311TRACKING LARGE NUMBERS OF MOVING OBJECTS IN AN EVENT PROCESSING SYSTEM - Techniques for tracking large numbers of moving objects in an event processing system are provided. An input event stream can be received, where the events in the input event stream represent the movement of a plurality of geometries or objects. The input event stream can then be partitioned among a number of processing nodes of the event processing system, thereby enabling parallel processing of one or more continuous queries for tracking the objects. The partitioning can be performed such that each processing node is configured to track objects in a predefined spatial region, and the spatial regions for at least two nodes overlap. This overlapping window enables a single node to find, e.g., all of the objects within a particular distance of a target object, even if the target object is in the process of moving from the region of that node to the overlapping region of another node.2016-02-04
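
The key idea above is that each processing node owns a spatial region but also receives objects that fall within an overlap margin of its region, so distance queries near a boundary can be answered by a single node. A one-dimensional Python sketch of that routing rule follows; the region layout and margin are illustrative.

```python
def assign_partitions(x: float, regions: list, overlap: float) -> list:
    """Route a one-dimensional object position to every node whose [lo, hi)
    region, widened by the overlap margin, contains it."""
    nodes = []
    for node_id, (lo, hi) in enumerate(regions):
        if lo - overlap <= x < hi + overlap:
            nodes.append(node_id)
    return nodes


if __name__ == "__main__":
    # three nodes covering [0,100), [100,200), [200,300) with a 10-unit overlap
    regions = [(0, 100), (100, 200), (200, 300)]
    for position in (50, 95, 105, 250):
        print(position, "->", assign_partitions(position, regions, overlap=10))
    # 95 and 105 land on both node 0 and node 1, so either node can answer a
    # "within 10 units of the target" query without crossing partitions
```
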
20160034312EMPIRICAL DETERMINATION OF ADAPTER AFFINITY IN HIGH PERFORMANCE COMPUTING (HPC) ENVIRONMENT - A method, apparatus and program product utilize an empirical approach to determine the locations of one or more IO adapters in an HPC environment. Performance tests may be run using a plurality of candidate mappings that map IO adapters to various locations in the HPC environment, and based upon the results of such testing, speculative adapter affinity information may be generated that assigns one or more IO adapters to one or more locations to optimize adapter affinity performance for subsequently-executed tasks.2016-02-04
20160034313EMPIRICAL DETERMINATION OF ADAPTER AFFINITY IN HIGH PERFORMANCE COMPUTING (HPC) ENVIRONMENT - A method, apparatus and program product utilize an empirical approach to determine the locations of one or more IO adapters in an HPC environment. Performance tests may be run using a plurality of candidate mappings that map IO adapters to various locations in the HPC environment, and based upon the results of such testing, speculative adapter affinity information may be generated that assigns one or more IO adapters to one or more locations to optimize adapter affinity performance for subsequently-executed tasks.2016-02-04
20160034314METHOD OF COMPUTING LATEST START TIMES TO ALLOW REAL-TIME PROCESS OVERRUNS - A method is provided for allowing process overruns while guaranteeing satisfaction of various timing constraints. At least one latest start time for an uncompleted process is computed. If an uncompleted process does not start at its latest start time, then at least one of the predetermined constraints may not be satisfied. A timer is programmed to interrupt a currently executing process at a latest start time. In another embodiment, information about ordering of the end times of the process time slots in a pre-run-time schedule is used by a run-time scheduler to schedule process executions. Exclusion relations can be used to prevent simultaneous access to shared resources. Any process that does not exclude a particular process is able to preempt that particular process at any appropriate time at run-time, which increases the chances that a process will be able to overrun while guaranteeing satisfaction of various timing constraints.2016-02-04
20160034315INFORMATION PROCESSING SYSTEM, DEPLOYMENT METHOD, PROCESSING DEVICE, AND DEPLOYMENT DEVICE - An objective of the present invention is to construct a system in which a plurality of software components having dependencies are deployed dispersedly on a plurality of processing devices.2016-02-04
20160034316TIME-VARIANT SCHEDULING OF AFFINITY GROUPS ON A MULTI-CORE PROCESSOR - Methods and systems for scheduling applications on a multi-core processor are disclosed, which may be based on association of processor cores, application execution environments, and authorizations that permits efficient and practical means to utilize the simultaneous execution capabilities provided by multi-core processors. The algorithm may support definition and scheduling of variable associations between cores and applications (i.e., multiple associations can be defined so that the cores an application is scheduled on can vary over time as well as what other applications are also assigned to the same cores as part of an association). The algorithm may include specification and control of scheduling activities, permitting preservation of some execution capabilities of a multi-core processor for future growth, and permitting further evaluation of application requirements against the allocated execution capabilities.2016-02-04
20160034317MAPPING RELATIONSHIPS AMONG VIRTUAL ELEMENTS ACROSS A SYSTEM - A system for mapping relationships among virtual elements across a system includes a switch and a server having a virtualized network interface controller (vNIC) with a plurality of vNIC links connected to the switch. The system also includes a virtual relationship module configured to: identify relationships between physical ports on the switch and virtual ports on the switch; for each vNIC link, identify local area network (LAN) interface information on the server; create data structures establishing topology information between the switch and the server; and create a mapping of each vNIC link to a respective virtual port on the switch by correlating the topology information with the LAN interface information.2016-02-04
20160034318SYSTEM AND METHOD FOR STAGING IN A CLOUD ENVIRONMENT - A method and system for staging in a cloud environment defines a default stage for integration flows. An integration flow is defined by (a) stages including (i) a live stage to follow the default stage, (ii) additional stages between the default and live stages, and (b) endpoint definitions for the live and additional stages. In response to an instruction to promote the integration flow, the integration flow is load balanced by allocating each stage to execution environment(s). Then, the integration flow is run in the execution environment(s). The load balancing includes, for each stage, (i) retrieving a list of execution environments which are available for execution of stages, (ii) selecting execution environment(s) on which to execute the stage and updating the list of available execution environments to indicate that the selected execution environment(s) is allocated, and (iii) storing the selected execution environment(s) as specific to the stage.2016-02-04
20160034319DISTRIBUTED TASKS FOR RETRIEVING SUPPLEMENTAL JOB INFORMATION - A method to assist with processing distributed jobs by retrieving and/or synchronizing supplemental job data. The method includes receiving a request to perform a job and opening a first connection (e.g., persistent connection) between a primary machine and a secondary machine, and transmitting by the primary machine a request pertaining to the job to the secondary machine using a second connection, the job to be performed by the secondary machine. The method also includes receiving by the primary machine using the second connection a task request for supplemental information pertaining to the job, transmitting by the primary machine a task response including the supplemental information to the secondary machine, and receiving a job result for the job using the second connection.2016-02-04
20160034320Virtual Application Extension Points - A virtual application may be configured with several extension points within a host operating system. The virtual application may be configured with a private namespace in which various components, such as registry settings, dynamic linked libraries, and other components may reside. During configuration, links may be placed in the host operating system that may point to objects in the virtual application's private namespace so that the operating system and other applications may launch, control, or otherwise interact with the virtual application. The links may be located in a file system, registry, or other locations and may be available to other applications, including other virtual applications. A configuration routine may place the links into the host operating system at the time the application may be configured.2016-02-04
20160034321TRACKING A RELATIVE ARRIVAL ORDER OF EVENTS BEING STORED IN MULTIPLE QUEUES USING A COUNTER USING MOST SIGNIFICANT BIT VALUES - An order controller stores each received event in a separate entry in one of at least two queues with a separate counter value set from an arrival order counter at the time of storage, wherein the arrival order counter is incremented after storage of each of the received events and on overflow the arrival order counter wraps back to zero. The order controller calculates an exclusive OR value of a first top bit of a first counter for a first queue from among the at least two queues and a second top bit of a second counter for a second queue from among the at least two queues. The order controller compares the exclusive OR value with a comparator bit to determine whether a first counter value in the first counter was stored before a second counter value in the second counter.2016-02-04
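
The comparison described above is the standard wrap-around-safe ordering test: XOR the most significant bits of the two stored counter values and combine the result with a comparator bit derived from the remaining bits. A small Python sketch follows, assuming a 4-bit arrival counter and the usual constraint that the two values are less than half the counter range apart; the exact bit naming is an assumption.

```python
COUNTER_BITS = 4                       # small width so wrap-around is easy to see
MOD = 1 << COUNTER_BITS
LOW_MASK = (1 << (COUNTER_BITS - 1)) - 1


def stored_before(first: int, second: int) -> bool:
    """True if `first` was assigned from the arrival counter before `second`,
    even across a wrap, provided fewer than 2**(COUNTER_BITS-1) events
    separate the two values: XOR of the top bits, combined with a comparator
    bit formed from the lower bits."""
    top_xor = ((first ^ second) >> (COUNTER_BITS - 1)) & 1   # XOR of the two top bits
    comparator = 1 if (first & LOW_MASK) < (second & LOW_MASK) else 0
    return bool(top_xor ^ comparator)


if __name__ == "__main__":
    arrival = 0

    def next_tag():
        global arrival
        tag, arrival = arrival, (arrival + 1) % MOD   # wraps back to zero on overflow
        return tag

    tags = [next_tag() for _ in range(20)]            # 20 events on a 4-bit counter
    # event 13 (tag 13) really did arrive before event 17 (tag 17 % 16 == 1)
    print(stored_before(tags[13], tags[17]))          # True, despite the wrap
    print(stored_before(tags[17], tags[13]))          # False
```
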
20160034322SYSTEMS AND METHODS FOR EVENT ROUTING AND CORRELATION - A collaboration environment provides a generic event distributing framework that can distribute both synchronous and asynchronous events. The distributed events may be pre-defined or dynamically defined. Further, the framework can support multiple data formats for the event payload. The collaboration environment relies on two separate APIs to separate event producers from event consumers.2016-02-04
20160034323CHARACTERIZING RELATIONSHIPS AMONG SPATIO-TEMPORAL EVENTS - A method of characterizing relationships among spatio-temporal events and a system to characterize the relationships are described. The method includes receiving information specifying the spatio-temporal events and associated categories from one or more sources. The method also includes building, using a processor, a directed acyclic graph (DAG) indicating a relationship among the categories for each of two or more space lag (SL) and time lag (TL) sets. Each of the two or more SL and TL sets defines a spatio-temporal boundary such that only the spatio-temporal events and the associated categories with (SL,TL)-neighborhoods inside the respective spatio-temporal boundary are considered in building the respective DAG. The respective (SL,TL)-neighborhood of each of the spatio-temporal events is a polygonal shape defined by the respective SL and the respective TL and the respective (SL,TL)-neighborhood of each of the categories is a union of the (SL,TL)-neighborhoods of the associated spatio-temporal events.2016-02-04
20160034324TRACKING A RELATIVE ARRIVAL ORDER OF EVENTS BEING STORED IN MULTIPLE QUEUES USING A COUNTER USING MOST SIGNIFICANT BIT VALUES - An order controller stores each received event in a separate entry in one of at least two queues with a separate counter value set from an arrival order counter at the time of storage, wherein the arrival order counter is incremented after storage of each of the received events and on overflow the arrival order counter wraps back to zero. The order controller calculates an exclusive OR value of a first top bit of a first counter for a first queue from among the at least two queues and a second top bit of a second counter for a second queue from among the at least two queues. The order controller compares the exclusive OR value with a comparator bit to determine whether a first counter value in the first counter was stored before a second counter value in the second counter.2016-02-04
20160034325MOVE OF OBJECT BETWEEN PAGES OF EDITABLE DOCUMENT - Embodiments relate to moving an object between pages of an editable document. An aspect includes determining that the object in an edit page of the editable document has been dragged to a target page thumbnail of the editable document. Another aspect includes zooming in on the target page thumbnail. Yet another aspect includes moving the object in the zoomed-in target page thumbnail.2016-02-04
20160034326MONITORING A BUSINESS TRANSACTION UTILIZING PHP ENGINES - An agent executing on a server identifies a function provided from a PHP library and executed by a PHP server and monitors the function. The present system places an interceptor on a first function in order to determine the identity of a second function. The second function may be identified from the first function's return value, a route object, an argument, the PHP program state, or some other part of the execution environment at the time the first function is intercepted. From the data analyzed at the time the first function is intercepted, the present system identifies the second function, which is also modified with an interceptor. The second function is monitored via the interceptor to determine performance and is associated with a business transaction.2016-02-04
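As a rough analogue of the two-stage interception described above (the real system instruments functions inside a PHP engine), the Python sketch below intercepts a hypothetical `route` function, uses its return value to identify the handler that will serve the request, and then wraps that handler with a timing interceptor tied to a business transaction. All names (`route`, `HANDLERS`, `timed`) are invented for the illustration.

```python
import time

# Hypothetical stand-ins for a PHP library's routing layer; the real agent
# instruments PHP functions rather than Python callables.
HANDLERS = {}

def handler(name):
    def register(fn):
        HANDLERS[name] = fn
        return fn
    return register

@handler("list_orders")
def list_orders():
    time.sleep(0.01)                      # pretend to do real work
    return "orders page"

def timed(fn, transaction):
    """Second interceptor: measures the identified function and ties the
    measurement to a business transaction."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed = (time.perf_counter() - start) * 1000
            print(f"[{transaction}] {fn.__name__} took {elapsed:.1f} ms")
    return wrapper

def route(path):
    """First function: its return value identifies the second function."""
    return path.strip("/")

def intercept_route(path):
    """First interceptor: inspects route()'s return value to find the
    second function, then wraps it with the timing interceptor."""
    handler_name = route(path)
    if handler_name in HANDLERS:
        HANDLERS[handler_name] = timed(HANDLERS[handler_name], transaction=path)
    return handler_name

name = intercept_route("/list_orders")
HANDLERS[name]()                          # runs the now-instrumented handler
```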
20160034327METHOD OF OPERATING NON-VOLATILE MEMORY DEVICE - A method of operating a non-volatile memory device including first buffer memory cells and main memory cells, where the first buffer memory cells store first data, the main memory cells store second data, which is read from the first buffer memory cells, or recovered first data, which is recovered from the second data through a correction process, includes reading data, which is stored in sample buffer memory cells included in the first buffer memory cells, as sample data when an accumulated number of read commands, which are executed on the non-volatile memory device, reaches a reference value. The method includes counting the number of errors included in the sample data based on an error correction code, and determining whether the main memory cells store the second data or the recovered first data based on the number of the errors relative to a threshold value.2016-02-04
20160034328SYSTEMS AND METHODS FOR SPATIALLY DISPLACED CORRELATION FOR DETECTING VALUE RANGES OF TRANSIENT CORRELATION IN MACHINE DATA OF ENTERPRISE SYSTEMS - Aspects of the present disclosure include systems and/or methods for detecting ranges of data that represent transient correlations in machine data corresponding to various hardware and/or software systems, such as enterprise systems employed by an information technology (“IT”) organization. In various aspects, the machine data may comprise one or more operational metrics that represent system performance, usage, and/or business activity of the enterprise system. The operational metrics may be used to identify operational issues within the enterprise system.2016-02-04
20160034329CORRELATION AND PREDICTION ANALYSIS OF COLLECTED DATA - Embodiments are directed towards a collection computer that automatically detects and monitors each of a plurality of sensors that are currently providing real-time data regarding characteristics of a first machine. A pattern in the provided real-time data may be determined based on a comparison to another pattern from previously provided real-time data from other sensors associated with a second machine. The comparison is employed to identify an event that previously occurred at the second machine, where a positive comparison may be employed as a prediction that the event corresponding to the second machine is about to happen to the first machine. Based on this prediction, an alert may be provided to at least one user of the first machine. The alert may include information such as an indication that a component has failed, is currently failing, or is about to fail.2016-02-04
20160034330INFORMATION-PROCESSING DEVICE AND METHOD - According to one embodiment, there is provided an information-processing device which includes a storage medium and a controller configured to acquire, for every storage area included in the storage medium, a delay time in accessing the storage area relative to a time at which an access is performed without retrying, based on first information relating to an access history of the storage area, and to determine a storage area whose delay time exceeds a predetermined allowable delay time to be a defective area.2016-02-04
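A minimal sketch of the delay-based screening described above, assuming hypothetical access-history records that pair each storage area's observed access latency with the latency of a retry-free access; the record layout and threshold value are placeholders, not the claimed implementation.

```python
# Hypothetical access-history records: (area_id, observed_latency_s, retry_free_latency_s).
ALLOWABLE_DELAY_S = 0.005                 # assumed allowable delay threshold

def find_defective_areas(access_history):
    defective = set()
    for area_id, observed, retry_free in access_history:
        delay = observed - retry_free     # extra time attributable to retries
        if delay > ALLOWABLE_DELAY_S:
            defective.add(area_id)
    return defective

history = [(0, 0.0021, 0.0020), (1, 0.0150, 0.0020), (2, 0.0019, 0.0020)]
print(find_defective_areas(history))      # {1}
```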
20160034331MEMORY SYSTEM AND DATA PROTECTION METHOD THEREOF - A memory system includes an abnormality detecting block including a plurality of abnormality detectors to detect whether an abnormal condition has occurred during a normal operation due to an external attack. An abnormality processing block is configured to process the abnormal condition in hardware, and a central processing unit is configured to execute a first process to detect whether the abnormal condition has occurred during the normal operation and to execute a second process to process the abnormal condition in software. A monitoring unit is configured to monitor an operation of the second process and to determine whether an error has occurred in the second process based on a monitoring result.2016-02-04
20160034332INFORMATION PROCESSING SYSTEM AND METHOD - An information processing system includes: a first system that includes a group of arithmetic units, a controller, and an external device; and a second system configured to execute calculation which is the same as calculation executed in the first system and compare calculation results to each other, wherein the controller is configured to: stop a plurality of arithmetic units when it is detected that an output request to the external device is output from one or more arithmetic units among the plurality of arithmetic units that execute first calculation in the group of arithmetic units, the plurality of arithmetic units including one or more arithmetic units that do not output the output request, transmit first comparison target data including a value output in response to the output request to the second system, and instruct the stopped one or more arithmetic units to execute second calculation.2016-02-04
20160034333POWER SUPPLY DEVICE, CONTROLLER THEREOF, METHOD OF CONTROLLING THE SAME, AND ELECTRONIC DEVICE EMPLOYING THE SAME - A controller, which is installed on a power supply device complying with the USB (Universal Serial Bus)-PD (power delivery) specification and controls a power supply circuit for supplying a bus voltage to a power receiving device via a bus line, is disclosed. The controller includes an interface circuit, which communicates with the power receiving device via the bus line; a processor, which transmits and receives messages to and from the power receiving device by using the interface circuit, determines a voltage level of the bus voltage, and sets the determined voltage level to the power supply circuit; and a watchdog timer, which is cleared whenever the processor executes a ping-related command for transmission or reception of ping messages to or from the power receiving device, wherein an overflow period of the watchdog timer is set to be longer than a period for the ping messages.2016-02-04
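The watchdog arrangement can be sketched in a few lines: the timer is cleared on every ping-related command and only trips if no ping activity occurs within an overflow period chosen to be longer than the ping interval. The Python model below is illustrative only; the margin factor and timing values are assumptions.

```python
import time

class PingWatchdog:
    """Toy watchdog: cleared on every ping-related command, trips only if
    no ping activity occurs within an overflow period that is deliberately
    longer than the ping interval."""

    def __init__(self, ping_interval_s, margin=1.5):    # margin is assumed
        self.overflow_s = ping_interval_s * margin
        self.last_cleared = time.monotonic()

    def clear(self):
        """Called whenever a ping message is transmitted or received."""
        self.last_cleared = time.monotonic()

    def expired(self):
        return time.monotonic() - self.last_cleared > self.overflow_s

wd = PingWatchdog(ping_interval_s=0.1)
wd.clear()                  # a ping-related command was executed
time.sleep(0.05)
print(wd.expired())         # False: still inside the overflow period
time.sleep(0.2)
print(wd.expired())         # True: ping traffic stopped, take protective action
```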
20160034334VISUAL TOOLS FOR FAILURE ANALYSIS IN DISTRIBUTED SYSTEMS - Visual tools are provided for failure analysis in distributed systems. Errors from synthetic measurements and usage data associated with a cloud based service are aggregated by a management application. The errors are processed to create a distribution that segments the errors based on components of the cloud based service. A failed component that generates a subset of the errors associated with a failure is highlighted. The failed component is one of the components of the cloud based service. The distribution is provided in a visualization to identify the failure by emphasizing the failed component with failure information in proximity to the failed component.2016-02-04
20160034335INFORMATION PROCESSING DEVICE - An information processing device includes: a virtual machine built in the information processing device and able to use a physical device included in the information processing device; and an information processing device failure managing unit for detecting a failure in the information processing device. The virtual machine includes: a virtual machine failure managing unit for detecting a failure in the physical device which the virtual machine can use; and a failure notifying unit for notifying the information processing device failure managing unit of the occurrence of a failure in the physical device detected by the virtual machine failure managing unit.2016-02-04
20160034336UNCORRECTABLE MEMORY ERRORS IN PIPELINED CPUS - Uncorrectable memory errors in pipelined central processing units. A processor core may be connected to a memory system and it may include a processor cache. In response to determining an uncorrectable error in data stored in the memory system, the address of the memory location of the uncorrectable error is stored in an address buffer and a recovery procedure is performed for the processor core. When data is fetched from a memory location whose address is stored in the address buffer, the content of a cache line related to the address is moved into a quarantine buffer of the processor core. When an error is detected in the data of the moved cache line, a repair procedure for the data of this address is triggered.2016-02-04
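A toy software model of the described flow, assuming invented structures for the address buffer, quarantine buffer, and cache: addresses that raised uncorrectable errors are remembered, later fetches from those addresses are quarantined, and a repair step is triggered when the quarantined line is found to be bad.

```python
LINE_SIZE = 64

class Core:
    """Toy model: remember addresses that raised uncorrectable errors,
    quarantine cache lines later fetched from them, and trigger a repair
    when the quarantined line is found to be bad."""

    def __init__(self):
        self.address_buffer = set()   # addresses with past uncorrectable errors
        self.quarantine = {}          # quarantined cache lines
        self.cache = {}

    def on_uncorrectable_error(self, address):
        self.address_buffer.add(address & ~(LINE_SIZE - 1))
        self.cache.clear()            # stand-in for the core recovery procedure

    def fetch(self, address, memory, line_has_error):
        line = address & ~(LINE_SIZE - 1)
        data = memory[line]
        if line in self.address_buffer:
            self.quarantine[line] = data          # move the line into quarantine
            if line_has_error(data):              # error check on the moved line
                print(f"repair triggered for line 0x{line:x}")
        else:
            self.cache[line] = data
        return data

memory = {0x0: b"ok", 0x40: b"bad"}
core = Core()
core.on_uncorrectable_error(0x40)
core.fetch(0x40, memory, line_has_error=lambda d: d == b"bad")
```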
20160034337Failure Mode Identification and Reporting - When a software component is starting, such as but not limited to a task or a subtask, the component pushes its identification (ID) onto a stack. The component executes its other instructions. If the component completes its instructions so that it can terminate normally, it pops the stack, which removes its ID from the stack. If the component fails, such as by not being able to complete its instructions, it will not be able to pop the stack so its ID will remain in the stack. Another software process can read the IDs in the stack to identify which components have failed and can automatically take a specified action, such as by sending an email message to, sending a text message to, or calling by telephone, a person or persons responsible for that software component.2016-02-04
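The push-on-start / pop-on-success scheme is simple enough to sketch directly; the Python below models it with an ordinary list standing in for the stack, and a separate loop playing the role of the reporting process. The task names and alert text are invented.

```python
failure_stack = []           # IDs of components that started but never finished

def run_component(component_id, work):
    """Push the ID on entry; pop it only if the work completes normally.
    IDs left on the stack identify components that failed."""
    failure_stack.append(component_id)
    work()                   # may raise, leaving the ID on the stack
    failure_stack.pop()

def good_task():
    pass

def bad_task():
    raise RuntimeError("could not complete instructions")

for cid, task in [("task-A", good_task), ("task-B", bad_task)]:
    try:
        run_component(cid, task)
    except Exception:
        pass                 # failure is reported via the stack, not here

# A separate process reads the stack and takes the configured action,
# e.g. notifying whoever is responsible for the failed component.
for failed_id in failure_stack:
    print(f"ALERT: component {failed_id} did not terminate normally")
```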
20160034338SEQUENTIAL CIRCUIT WITH ERROR DETECTION - Sequential circuits with error-detection are provided. They may, for example, be used to replace traditional master-slave flip-flops, e.g., in critical path circuits to detect and initiate correction of late transitions at the input of the sequential. In some embodiments, such sequentials may comprise a transition detector with a time borrowing latch.2016-02-04
20160034339Error Recovery Within Integrated Circuit - An integrated circuit includes one or more portions having error detection and error correction circuits, which are operated with operating parameters giving a finite non-zero error rate, as well as one or more portions formed and operated to provide a zero error rate.2016-02-04
20160034340APPARATUSES AND METHODS FOR FIXING A LOGIC LEVEL OF AN INTERNAL SIGNAL LINE - An apparatus includes a first external terminal, a first circuit, a signal line and a second circuit. The first external terminal receives at least one of data mask information and data bus inversion information. The first circuit performs one of an error check operation and a data bus inversion operation. The signal line is coupled between the first external terminal and the first circuit. The second circuit is coupled to the signal line and fixes a voltage level of the signal line at a substantially constant level responsive to a first control signal.2016-02-04
20160034341ORPHAN BLOCK MANAGEMENT IN NON-VOLATILE MEMORY DEVICES - A system for data storage includes one or more non-volatile memory (NVM) devices, each device including multiple memory blocks, and a processor. The processor is configured to assign the memory blocks into groups, to apply a redundant data storage scheme in each of the groups, to identify a group of the memory blocks including at least one bad block that renders remaining memory blocks in the group orphan blocks, to select a type of data suitable for storage in the orphan blocks, and to store the data of the identified type in the orphan blocks.2016-02-04
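A small sketch of how orphan blocks might be identified in software, under the assumption that blocks are grouped into fixed-size redundancy groups and that the bad-block set is already known; the group size and block numbering are arbitrary.

```python
GROUP_SIZE = 4          # blocks per redundancy group (assumed)

def find_orphan_blocks(groups, bad_blocks):
    """Survivors of any group that contains a bad block can no longer take
    part in that group's redundant storage scheme, so they are treated as
    orphan blocks."""
    orphans = []
    for group in groups:
        if any(block in bad_blocks for block in group):
            orphans.extend(b for b in group if b not in bad_blocks)
    return orphans

blocks = list(range(12))
groups = [blocks[i:i + GROUP_SIZE] for i in range(0, len(blocks), GROUP_SIZE)]
orphans = find_orphan_blocks(groups, bad_blocks={5})
print(orphans)          # [4, 6, 7]: data that tolerates weaker protection goes here
```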
20160034342OPERATIONAL VIBRATION COMPENSATION THROUGH MEDIA CACHE MANAGEMENT - Apparatus and method for managing a media cache through the monitoring of operational vibration of a data storage device. In some embodiments, a non-volatile media cache of the data storage device is partitioned into at least first and second zones having different data recording characteristics. Input data are received for storage in a non-volatile main memory of the data storage device. An amount of operational vibration associated with the data storage device is measured. The input data are stored in a selected one of the first or second zones of the media cache prior to transfer to the main memory responsive to a comparison of the measured amount of operational vibration to a predetermined operational vibration threshold.2016-02-04
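The zone-selection decision can be pictured as a threshold test at write time; the Python sketch below uses a random number in place of a real vibration sensor, and the zone names and threshold are placeholders rather than the claimed device behavior.

```python
import random

VIBRATION_THRESHOLD = 0.5        # assumed threshold in arbitrary sensor units

def measure_vibration():
    """Stand-in for reading the device's vibration sensor."""
    return random.uniform(0.0, 1.0)

def select_cache_zone(vibration):
    # The two zones are assumed to have different recording characteristics;
    # under heavy vibration the more robust zone is chosen.
    return "zone2_robust" if vibration > VIBRATION_THRESHOLD else "zone1_fast"

def stage_in_media_cache(data):
    vibration = measure_vibration()
    zone = select_cache_zone(vibration)
    print(f"vibration={vibration:.2f} -> staging {len(data)} bytes in {zone}")
    # ...later the staged data is transferred from the media cache to main memory.

stage_in_media_cache(b"input data block")
```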
20160034343DATA STORING METHOD, MEMORY CONTROL CIRCUIT UNIT AND MEMORY STORAGE DEVICE - A data storing method, a memory control circuit unit and a memory storage device are provided. The method includes: generating a parity according to first data. The method also includes: when programming the first data into a first physical programming unit, programming at least one mark into a redundancy bit area of the first physical programming unit. The method further includes: programming the parity into at least one second physical programming unit arranged after the first physical programming unit, where the at least one mark indicates that the parity is programmed into the at least one second physical programming unit.2016-02-04
20160034344Error Repair Location Cache - A method for repairing a memory includes executing an Error Correction Code (ECC) for a page of the memory. The page includes a plurality of bits having an inherent number of failed bits equal to or greater than zero. The ECC is configured to correct a correctable number of failed bits from the plurality of bits. A location of a failure prone bit in the page is determined from a cache in response to the correctable number of failed bits being less than the inherent number of failed bits. A state of the failure prone bit is changed to a new state in response to determining the location of the failure prone bit. The ECC is executed in response to the state of the failure prone bit being changed to the new state.2016-02-04
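The repair loop described above, sketched in Python under toy assumptions: ECC decoding is stood in for by comparing against an all-zeros expectation, and the cache of failure-prone bit locations is just a list of suspect positions for the page.

```python
def read_page(page_bits, ecc_correctable, known_weak_bits):
    """Toy model: if ECC cannot correct all failed bits, look up a
    failure-prone bit location in a cache, flip it, and retry ECC."""

    def ecc_failed_positions(bits):
        # Stand-in for ECC decoding against assumed expected data of all zeros.
        return [i for i, b in enumerate(bits) if b != 0]

    failures = ecc_failed_positions(page_bits)
    if len(failures) <= ecc_correctable:
        return "corrected by ECC"

    # More failures than the code can correct: consult the location cache.
    for position in known_weak_bits:
        page_bits[position] ^= 1          # flip the failure-prone bit
        if len(ecc_failed_positions(page_bits)) <= ecc_correctable:
            return f"corrected after repairing bit {position}"
        page_bits[position] ^= 1          # undo and try the next candidate

    return "uncorrectable"

page = [0, 1, 0, 0, 1, 0, 0, 0]           # two failed bits, ECC can fix only one
print(read_page(page, ecc_correctable=1, known_weak_bits=[4]))
```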
20160034345MEMORY LATENCY MANAGEMENT - Apparatus, systems, and methods to manage memory latency operations are described. In one embodiment, an electronic device comprises a processor and a memory control logic to receive data from a remote memory device, store the data in a local cache memory, receive an error correction code indicator associated with the data, and implement a data management policy in response to the error correction code indicator. Other embodiments are also disclosed and claimed.2016-02-04
20160034346NAND Flash Memory Having Internal ECC Processing and Method of Operation Thereof - A continuous read operation may be achieved by using a data buffer having a partitioned data register and a partitioned cache register, user configurable internal ECC associated with the cache register, and fast bad block management. During a data read operation, the ECC status may be indicated by ECC status bits. The status (1:1), for example, may indicate for the Continuous Read Mode that the entire data output contains more than 4 bit errors per page in multiple pages. However, one may wish to know the ECC status of each page or of each page partition. For the former, the ECC status for the entire page may be determined and made available in the status register at the end of the output of the page. For the latter, the ECC status of each page partition may be determined and output before output of the corresponding page partition.2016-02-04
20160034347MEMORY SYSTEM - According to one embodiment, a memory system includes a nonvolatile memory and a controller. The nonvolatile memory includes first number of storage areas. The first number is two or more. The controller generates a plurality of second data by encoding a plurality of first data. The controller writes each piece of the second data to any one of the first number of storage areas. The controller successively repeats parallel read processing to read each piece of third data from the storage area. The parallel read processing is processing for reading, in parallel, a piece of third data from each of a second number of storage areas among the first number of storage areas. The second number is two or more. The controller determines a write destination of each piece of second data so that the second number becomes uniform for each parallel read processing.2016-02-04
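One simple way to keep the parallel read batches uniform is to stripe write destinations round-robin across the storage areas, so that consecutively written pieces land on distinct areas. The sketch below assumes four areas and a parallelism of two; the encoder is a stand-in, and the striping rule is an illustrative choice rather than the claimed controller logic.

```python
NUM_AREAS = 4        # "first number": storage areas in the nonvolatile memory
PARALLEL = 2         # "second number": areas read from in parallel per batch

def encode(piece):
    return f"enc({piece})"               # stand-in for the encoder

def choose_destinations(num_pieces):
    """Round-robin assignment: consecutive pieces land on distinct areas, so
    every parallel read batch finds the same number of areas to read from."""
    return [i % NUM_AREAS for i in range(num_pieces)]

areas = [[] for _ in range(NUM_AREAS)]
pieces = [f"d{i}" for i in range(8)]
for piece, dest in zip(pieces, choose_destinations(len(pieces))):
    areas[dest].append(encode(piece))

# Each parallel read batch reads one piece from PARALLEL areas at once.
for start in range(0, NUM_AREAS, PARALLEL):
    batch = list(range(start, start + PARALLEL))
    print("parallel read from areas", batch, [areas[a][0] for a in batch])
```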
20160034348SEMICONDUCTOR MEMORY DEVICE HAVING SELECTIVE ECC FUNCTION - A semiconductor memory device having a selective error correction code (ECC) function is provided. The semiconductor memory device divides a memory cell array into blocks according to data retention characteristics of memory cells. A block in which there are a plurality of fail cells generated at a refresh rate of a refresh cycle that is longer than a refresh cycle defined by the standards of the semiconductor device is selected from among the divided blocks. The selected block repairs the fail cells by performing the ECC function. The other blocks repair the fail cells by using redundancy cells. Accordingly, a refresh operation is performed on the memory cells of the memory cell array at the refresh rate of the refresh cycle that is longer than the refresh cycle defined by the standards of the semiconductor device.2016-02-04
20160034349OPERATING METHOD OF MEMORY CONTROLLER AND NONVOLATILE MEMORY DEVICE - A method of operating a nonvolatile memory device including a plurality of memory cells is provided. A default read operation is performed on a page using a default read voltage set to generate default raw data. If error bits of the default raw data are not corrected, a plurality of low-level read operations is performed on the page using a plurality of read voltage sets to generate a plurality of low-level raw data. Each read voltage set is different from the default voltage set. A read voltage set is selected from the plurality of read voltage sets as a starting voltage set, according to each low-level raw data. A high-level read operation using the selected starting voltage set is performed on the page to generate high-level raw data.2016-02-04
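A compact model of the retry flow: a default read, then quick low-level reads with several candidate voltage sets, then a full read with whichever candidate looked best. The error counts returned by `read_raw` are fabricated for the example, and the voltage-set names and ECC limit are placeholders.

```python
# Hypothetical read-voltage sets: the default plus several retry candidates.
DEFAULT_SET = "V_default"
RETRY_SETS = ["V_low1", "V_low2", "V_low3"]

def read_raw(page, voltage_set):
    """Stand-in for a raw page read; returns (data, raw_bit_error_estimate)."""
    errors = {"V_default": 9, "V_low1": 6, "V_low2": 2, "V_low3": 5}[voltage_set]
    return f"raw({page},{voltage_set})", errors

def ecc_correct(data, errors, limit=3):
    return data if errors <= limit else None

def read_page(page):
    data, errors = read_raw(page, DEFAULT_SET)          # default read operation
    corrected = ecc_correct(data, errors)
    if corrected is not None:
        return corrected

    # Default read failed: quick low-level reads with each candidate set,
    # then pick the most promising one as the starting voltage set.
    estimates = {vs: read_raw(page, vs)[1] for vs in RETRY_SETS}
    starting_set = min(estimates, key=estimates.get)

    # High-level (full) read using the selected starting voltage set.
    data, errors = read_raw(page, starting_set)
    return ecc_correct(data, errors)

print(read_page(page=0))      # succeeds with the best retry voltage set
```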
20160034350ADAPTIVE ERROR CORRECTION IN A MEMORY SYSTEM - According to one aspect, a method for adaptive error correction in a memory system includes reading data from a memory array of a non-volatile memory device in the memory system. Error correcting logic checks the data for at least one error condition stored in the memory array. Based on determining that the at least one error condition exists, a write-back indicator is asserted by the error correcting logic to request correction of the at least one error condition. Based on determining that the at least one error condition does not exist, accesses of the memory array continue without asserting the write-back indicator.2016-02-04
20160034351Apparatus and Method for Programming ECC-Enabled NAND Flash Memory - The NAND flash memory array in a memory device may be programmed using a cache program execute technique for fast performance. The memory device includes a page buffer, which may be implemented as a cache register and a data register. Program data may be loaded to the cache register, where it may be processed by an error correction code (“ECC”) circuit. Thereafter, the ECC processed data in the cache register may be replicated to the data register and used to program the NAND flash memory array. Advantageously, immediately after the ECC processed data in the cache register is replicated to the data register, the cache register may be made available for other operations. Of particular benefit is that a second page of program data may be loaded into the cache register and ECC processed while the first page of program data is being programmed into the NAND flash memory array.2016-02-04
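The cache/data register overlap can be mimicked with a bounded queue between a loader and a programmer: while one page is being programmed, the next is already being ECC-processed. The Python sketch below is only an analogy for the hardware pipeline; the timings and names are invented.

```python
import queue
import threading
import time

# Toy analogy for the cache-register / data-register pipeline.
cache_to_data = queue.Queue(maxsize=1)    # "replicate cache register to data register"

def ecc_encode(page):
    return page + "+ecc"                  # stand-in for the ECC circuit

def load_and_encode(pages):
    for page in pages:
        encoded = ecc_encode(page)        # happens while a prior page is programming
        cache_to_data.put(encoded)        # frees the cache register for the next page
    cache_to_data.put(None)               # end of program data

def program_array():
    while (page := cache_to_data.get()) is not None:
        time.sleep(0.01)                  # stand-in for the array program time
        print("programmed", page)

worker = threading.Thread(target=program_array)
worker.start()
load_and_encode(["page0", "page1", "page2"])
worker.join()
```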
20160034352NAND Flash Memory Having an Enhanced Buffer Read Capability and Method of Operation Thereof - A page buffer suitable for continuous page read may be implemented with a partitioned data register, a partitioned cache register, and a suitable ECC circuit. The partitioned data register, partitioned cache register, and associated ECC circuit may also be used to realize a substantial improvement in the page read operation by using a modified Page Data Read instruction and/or a Buffer Read instruction, including in some implementations the use of a partition busy bit.2016-02-04
20160034353Storage Module and Method for Improved Error Correction by Detection of Grown Bad Bit Lines - A storage module and method are provided for improved error correction by detection of grown bad bit lines. In one embodiment, a storage module is provided comprising a controller and a memory having a plurality of bit lines. The controller detects an uncorrectable error in a code word read from the memory, determines location(s) of grown bad bit line(s) that contributed to the error in the code word being uncorrectable, and uses the determined location(s) of the grown bad bit line(s) to attempt to correct the error in the code word.2016-02-04
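Treating the known bad bit-line positions as erasures is the key move; the toy Python below shows the idea with a single parity bit, which can resolve exactly one erased position. A real code word would use a much stronger code, so this is only an illustration of the decoding step, not the claimed method.

```python
def parity(bits):
    return sum(bits) % 2

def try_correct(codeword, bad_positions):
    """Toy erasure correction: if the word fails its parity check, treat the
    known grown-bad bit-line positions as erasures and re-solve them from
    the parity constraint."""
    data, stored_parity = codeword[:-1], codeword[-1]
    if parity(data) == stored_parity:
        return data                      # nothing to do

    if len(bad_positions) == 1:          # one erasure is solvable here
        fixed = list(data)
        fixed[bad_positions[0]] ^= 1     # choose the value that restores parity
        if parity(fixed) == stored_parity:
            return fixed
    return None                          # still uncorrectable

codeword = [1, 0, 1, 1, 0, 1]            # last element is the parity bit
codeword[2] ^= 1                         # bit line 2 has grown bad
print(try_correct(codeword, bad_positions=[2]))   # recovers [1, 0, 1, 1, 0]
```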
20160034354GLOBAL ERROR RECOVERY SYSTEM - In a network storage device that includes a plurality of data storage drives, error correction and/or recovery of data stored on one of the plurality of data storage drives is performed cooperatively by the drive itself and by a storage host that is configured to manage storage in the plurality of data storage drives. When an error-correcting code (ECC) operation performed by the drive cannot correct corrupted data stored on the drive, the storage host can attempt to correct the corrupted data based on parity and user data stored on the remaining data storage drives. In some embodiments, data correction can be performed iteratively between the drive and the storage host. Furthermore, the storage host can control latency associated with error correction by selecting a particular error correction process.2016-02-04
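The host-side fallback resembles a RAID-style parity rebuild: when a drive reports an uncorrectable block, the host XORs the corresponding blocks from the surviving drives with the parity block. The sketch below assumes a single-parity stripe over four hypothetical drives; it illustrates the parity math only, not the iterative drive/host protocol.

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

# Hypothetical stripe across four data drives plus one parity drive.
data_blocks = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity_block = xor_blocks(data_blocks)

def host_recover(surviving_blocks, parity):
    """If a drive's own ECC cannot correct a block, the storage host rebuilds
    it from parity and the user data on the remaining drives."""
    return xor_blocks(surviving_blocks + [parity])

failed_drive = 2
surviving = [blk for i, blk in enumerate(data_blocks) if i != failed_drive]
print(host_recover(surviving, parity_block))      # b'CCCC'
```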