43rd week of 2015 patent application highlights part 52 |
Patent application number | Title | Published |
20150301807 | Partial Specialization of Generic Classes - Generic classes may have more than one specializable type parameter and it may be desirable to specialize one or more of the type variables while not specializing others. The result of partial specialization may be one or more additional generic classes that are further specializable on the remaining type parameters. A runtime specializer may partially specialize a generic class to produce a partially specialized class and may subsequently further specialize the partially specialized class to generate a fully specialized class. Thus, rather than performing the specialization of a generic class all at once, such as by specializing Map<K,V> directly into Map<int,int> or Map<int,String>, one type parameter may be partially specialized, such as resulting in Map<int,V>, and then at some later time the remaining type parameter(s) may be specialized, such as to generate Map<int,int> or Map<int,String>. | 2015-10-22 |
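The staged specialization described in this abstract can be illustrated with a minimal Python sketch, in which a generic class is modeled as a name plus type parameters and a specializer binds parameters one at a time. All class and method names here are illustrative assumptions, not anything from the patent:

```python
# Hypothetical model of partial specialization: bind some type parameters
# now, leaving the rest free for a later specialization step.
class GenericClass:
    def __init__(self, name, params, bindings=None):
        self.name = name
        self.params = list(params)            # declared type parameters, in order
        self.bindings = dict(bindings or {})  # parameter name -> bound type

    def specialize(self, **types):
        """Bind some (or all) free type parameters, yielding a new class description."""
        free = set(self.params) - set(self.bindings)
        unknown = set(types) - free
        if unknown:
            raise ValueError(f"not a free type parameter: {sorted(unknown)}")
        return GenericClass(self.name, self.params, {**self.bindings, **types})

    @property
    def fully_specialized(self):
        return len(self.bindings) == len(self.params)

    def __repr__(self):
        args = ", ".join(self.bindings.get(p, p) for p in self.params)
        return f"{self.name}<{args}>"

Map = GenericClass("Map", ["K", "V"])
partial = Map.specialize(K="int")      # Map<int, V> -- partially specialized
full = partial.specialize(V="String")  # Map<int, String> -- fully specialized
```

The key property mirrored from the abstract is that `partial` is itself still a generic class, further specializable on its remaining parameter.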
20150301808 | Manual Refinement of Specialized Classes - While a runtime specializer may always be able to generate an automated specialized version of a generic class, in some cases an alternate form of user control over specialization may allow the use of automated specialization while also adding (or overriding) specialization-specific method implementations. In general, the set of members of a generic class may not change when the class is specialized. In other words, the same members may exist in the auto-specialized version as in the generic version. However, manual refinement of specialized classes may allow a developer to hand specialize a particular (possibly a better) representation and/or implementation of one or more methods of the specialized class. | 2015-10-22 |
20150301809 | Wholesale Replacement of Specialized Classes - Wholesale replacement of specialized classes may involve not using the auto specialization of a generic class at all and instead using a completely different, hand-written class when the class is specialized for particular type parameterizations, according to some embodiments. The replacement class may have the same interface as the generic or auto specialized version, but it may have a completely different representation and/or implementation. A runtime environment may load the alternate version of the class, based on information identifying the alternate version, whenever the particular specialization is instantiated. The runtime may not have to load the generic or auto specialized version of the class when using the alternate version of the class. | 2015-10-22 |
20150301810 | Data Processing Method and Apparatus - A data processing method and apparatus, which relate to the computer field and are capable of effectively improving scalability of a database system. The data processing method includes: receiving source code of an external routine, where the source code of the external routine is written in a high-level programming language; compiling the source code to obtain intermediate code, where the intermediate code is a byte stream recognizable by a virtual machine on any operating platform; converting, according to an instruction set of the operating platform, the intermediate code into machine code capable of running on the operating platform; and storing the machine code in a database. The data processing method and apparatus provided by the embodiments of the present invention are used to process data. | 2015-10-22 |
20150301811 | METHOD AND APPARATUS FOR TESTING BROWSER COMPATIBILITY - A method and apparatus for testing browser compatibility are provided. The method includes: pre-processing source code of the webpage to determine a code type; obtaining a compatibility rule library according to the code type; conducting syntax parsing of the source code to obtain a syntax tree of the source code; and conducting static analysis of the source code based on the compatibility rule library and the syntax tree. The method and apparatus for testing browser compatibility conduct static analysis of the webpage source code to test browser compatibility, which is simple and inexpensive. | 2015-10-22 |
20150301812 | Metadata-driven Dynamic Specialization - Metadata-driven dynamic specialization may include applying a type erasure operation to a set of instructions in a generic class or to a method declaration that includes typed variables using an encoded form of an instruction or an argument to an instruction. The instruction may operate on values of the reference types and the argument may be a signature that indicates the reference types. The encoded form may be annotated to include metadata indicating which type variables have been erased and which reference types are the erasures of type variables. Additionally, the metadata may indicate that the instruction operates on values of, and that the argument indicates reference types that are erasures of, the type variables of the class (or method) declaration. Moreover, the encoded form of the instruction or argument may be used directly without specialization or transformation. | 2015-10-22 |
20150301813 | METHODS AND SYSTEMS FOR FORMING AN ADJUSTED PERFORM RANGE - One or more regions of COBOL source code having an entry point are identified. A PERFORM instruction associated with the entry point to analyze is selected. A PERFORM range for the selected PERFORM instruction is determined. An instruction that changes control flow in execution of the COBOL source code subsequent to the selected PERFORM instruction is identified. Flow-affected code resulting from the instruction is determined. An adjusted PERFORM range for the selected PERFORM instruction is formed. | 2015-10-22 |
20150301814 | APPLICATION DEPLOYMENT METHOD AND SCHEDULER - An application deployment method and a scheduler are disclosed. The method includes: receiving, by a scheduler, an application deployment request sent for a first application by a cloud controller of a first cloud; after receiving the application deployment request, sending, by the scheduler, a first query message and a second query message to a cloud controller of a second cloud, and sending a second query message to a cloud controller of a third cloud; determining, by the scheduler, a target calculation unit from at least one calculation unit that is obtained by querying by using the first query message and the second query message and that has a first calculation capability; and deploying, by the scheduler, the first application to the target calculation unit. | 2015-10-22 |
20150301815 | VIRAL DISTRIBUTION OF MOBILE APPLICATION SOFTWARE - Methods and apparatus, including computer program products, are provided for distribution of a mobile application. In one aspect there is provided a method. The method may include storing, at a first user equipment, a mobile payment application installation package; sending, by the first user equipment via a short-range radio link, an invitation to a second user equipment, the invitation representing an offer to receive the mobile payment application installation package; receiving, at the first user equipment, a response to the invitation; and sending, by the first user equipment via the short-range radio link, the mobile payment application installation package, when the response to the invitation represents an acceptance of the offer. Related apparatus, systems, methods, and articles are also described. | 2015-10-22 |
20150301816 | SYSTEM AND METHOD FOR UPDATING NETWORK COMPUTER SYSTEMS - An update system configured to provide software updates, software patches and/or other data packets to one or more computer systems via a network is disclosed. The update system may interact with a network management system, such as an enterprise management system, to distribute data packets and gather configuration information. The update system may generate and send commands to the network management system. The network management system may carry out the commands to distribute data packets and/or gather configuration information. | 2015-10-22 |
20150301817 | NOTIFICATION SYSTEM, METHOD AND DEVICE THEREFOR - A system comprising a server in communication with a user device configured to run a plurality of applications. A data structure is held in the system, said data structure comprising multiple entries, wherein each entry associates an application pairing with a weighting, each application pairing comprising a source installation and a target installation. The user device running an application corresponding to one of the source installations selects one of the target installations to download from the server on the basis of the weightings, wherein each weighting for a source installation is proportional to the probability of the respective target installation being selected and installed. | 2015-10-22 |
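The weighting-proportional selection described in this entry amounts to a weighted random draw over the pairings whose source matches the running application. A minimal sketch, assuming a simple `(source, target, weight)` data shape that is my illustration rather than the patent's data structure:

```python
import random

def select_target(pairings, source, rng=random):
    """Pick a target installation for the given source installation, with
    probability proportional to each matching pairing's weighting.
    `pairings` is a list of (source, target, weight) tuples (assumed shape)."""
    candidates = [(t, w) for (s, t, w) in pairings if s == source and w > 0]
    if not candidates:
        return None
    targets, weights = zip(*candidates)
    return rng.choices(targets, weights=weights, k=1)[0]

pairings = [
    ("app_a", "app_b", 3.0),   # app_b should be picked ~3x as often as app_c
    ("app_a", "app_c", 1.0),
    ("app_d", "app_e", 5.0),
]
# A seeded RNG makes the draw reproducible for demonstration:
pick = select_target(pairings, "app_a", rng=random.Random(0))
```

Passing a seeded `random.Random` keeps the example deterministic; in a real deployment the default module-level RNG would be used.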
20150301818 | SYSTEM AND METHODS FOR UPDATING SOFTWARE OF TEMPLATES OF VIRTUAL MACHINES - Disclosed are systems, methods and computer readable medium for updating software of templates of virtual machines. An example method includes determining a first coefficient indicative of a level of importance of a continuous operation of one or more virtual machines created from a virtual machine template; determining a second coefficient indicative of a level of criticality of software updates on the one or more virtual machines created from the virtual machine template; determining a third coefficient as a function of the first coefficient and the second coefficient; and when the third coefficient exceeds a threshold, updating the software on the virtual machine template to generate an updated virtual machine template. | 2015-10-22 |
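The three-coefficient decision in this abstract can be sketched directly: combine the importance and criticality coefficients into a third, and update the template only when that exceeds a threshold. The abstract does not specify the combining function, so the ratio used below is purely an assumption for illustration:

```python
def should_update_template(importance, criticality, threshold):
    """Decide whether to update a VM template.

    importance:  level of importance of continuous operation of the VMs
                 created from the template (first coefficient)
    criticality: level of criticality of the pending software updates
                 (second coefficient)

    The third coefficient is computed here as criticality / importance --
    an assumed form, since the abstract only says it is a function of the
    first two. High criticality pushes toward updating; high importance of
    uninterrupted operation pushes against it.
    """
    combined = criticality / max(importance, 1e-9)  # third coefficient (assumed)
    return combined > threshold
```

With this form, critical updates to templates backing unimportant VMs update readily, while templates backing must-stay-up VMs require a proportionally stronger reason.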
20150301819 | METHOD OF MANAGING A SCHEDULE-BASED SOFTWARE PACKAGE UPDATE - A system and method of managing a vehicle software configuration includes: receiving from a user both a software package identifier for a software package that will be loaded onto a vehicle during a temporal period that is selected by the user and a vehicle identifier; identifying the software package associated with the software package identifier; wirelessly sending the software package from a central facility to the vehicle associated with the vehicle identifier for use during the user-selected temporal period; and storing the software package at the vehicle during the user-selected temporal period. | 2015-10-22 |
20150301820 | Modification of Terminal and Service Provider Machines Using an Update Server Machine - A system including a terminal machine and a service provider machine is modified using a service provider machine. The terminal machine includes a terminal application for displaying a prompt in a first sequence of prompts and accepting a user data entry in a first series of data entries. The service provider machine includes a provider application for receiving the user data entry. The update server machine sends a dialogue module including a first and second set of updated code to the terminal machine and the service provider machine, respectively. The dialogue module does not modify computer-executable instructions saved on the terminal or service provider machines. The first and second set of updated code adapts the terminal application and provider application, respectively, to use a second sequence of prompts and a second sequence of data entries. | 2015-10-22 |
20150301821 | SYSTEM AND METHOD FOR MANAGEMENT OF SOFTWARE UPDATES AT A VEHICLE COMPUTING SYSTEM - A vehicle software management system includes a transceiver configured to communicate information with a server, and a processor in communication with the transceiver. The processor may be configured to receive a file manifest from the server and transmit a list of to-be updated application file(s) based on the file manifest to the server. The processor may be further configured to receive one or more application files from the server based on the list. The processor may be further configured to flash one or more systems using the one or more application files based on at least one of destination file location, installation type, and file identification. | 2015-10-22 |
20150301822 | IN-VEHICLE PROGRAM UPDATE APPARATUS - A gateway ECU includes an update condition table that indicates a vehicle load state capable of updating an ECU program corresponding to each of several ECUs. The gateway ECU wirelessly communicates with an external center apparatus to receive an update file. The gateway ECU uses the update condition table to determine whether the current vehicle load state equals a lightly loaded state capable of updating an ECU program or a heavily loaded state incapable of updating an ECU program. If the current vehicle load state is determined to equal the lightly loaded state, the gateway ECU updates an ECU program using the update file received from the center apparatus. If the current vehicle load state is determined to equal the heavily loaded state, the gateway ECU performs environment improvement control based on the update condition table to change the current vehicle load state to the lightly loaded state. | 2015-10-22 |
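The gateway ECU's update flow above (consult the update condition table; if the vehicle is heavily loaded, perform environment improvement control first) can be sketched as follows. The state names, dictionary shapes, and improvement action are illustrative assumptions, not the patent's design:

```python
# Hypothetical sketch of load-state-gated ECU updating.
LIGHT, HEAVY = "lightly_loaded", "heavily_loaded"

def improve_environment(vehicle):
    # Assumed environment improvement control: shed noncritical load so the
    # vehicle transitions to the lightly loaded state.
    vehicle["load_state"] = LIGHT

def try_update(ecu, vehicle, update_file, condition_table):
    """Apply `update_file` to `ecu` only when the vehicle load state matches
    the state the update condition table requires for that ECU."""
    required = condition_table[ecu]
    if vehicle["load_state"] != required:
        improve_environment(vehicle)
    if vehicle["load_state"] == required:
        vehicle["programs"][ecu] = update_file
        return True
    return False
```

In the real apparatus the improvement step may fail or take time; the sketch assumes it succeeds immediately for brevity.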
20150301823 | Information Processing Device, Difference Information Generating Device, Program, And Recording Medium. - An information processing device includes: an application recording portion in which application software is recorded; a patch obtaining portion obtaining patch data from a server; a patch recording portion in which the obtained patch data is recorded; and an application executing portion executing an application using the application software and the patch data. The patch obtaining portion includes a difference information obtaining unit obtaining data block difference information between a latest patch file retained by the server and a patch file recorded in the patch recording portion, and a download executing unit downloading an updated data block from the latest patch file according to the difference information. | 2015-10-22 |
20150301824 | VERSION CONTROL OF APPLICATIONS - An application development system allows developers of software system to manage infrastructure resources during the development and testing process. The application development system allows users to define application containers that comprise components including source code, binaries, and virtual databases used for the application. An application container can be associated with policies that control various aspects of the actions taken using the application container including constraints and access control. The application development system enforces the policies for actions taken by users for the application containers. The encapsulation of policies with the application containers allows users of the application containers to take actions including creating virtual databases, provisioning virtual databases, and the like without requiring system administrators to manage resource issues. | 2015-10-22 |
20150301825 | Decomposing a Generic Class into Layers - The domain of genericity of an existing generic class may be expanded to include not just reference types, but also primitive and value types even though some members of the existing class do not support the expanded genericity. A subdivided version of the class may be created that includes a generic layer including abstract versions of class members and a reference-specific layer that includes non-abstract versions of class members that are abstract in the generic layer. The subdivided version of the class may also include information that indicates to which layer a class member belongs. Problematic methods (e.g., methods that have built-in assumptions regarding the domain of genericity) may be moved into the second, reference-specific, layer, thereby retaining compatibility with classes that currently instantiate or reference those methods, while still allowing use within the expanded domain of genericity. | 2015-10-22 |
20150301826 | SHARING PROCESSING RESULTS BETWEEN DIFFERENT PROCESSING LANES OF A DATA PROCESSING APPARATUS - A data processing apparatus has control circuitry for detecting whether a first micro-operation to be processed by a first processing lane would give the same result as a second micro-operation processed by a second processing lane. If they would give the same result, then the first micro-operation is prevented from being processed by the first processing lane and the result of the second micro-operation is output as the result of the first micro-operation. This avoids duplication of processing, saving energy, for example. | 2015-10-22 |
20150301827 | REUSE OF RESULTS OF BACK-TO-BACK MICRO-OPERATIONS - A data processing apparatus has control circuitry for detecting whether a current micro-operation to be processed by processing circuitry is for the same data processing operation and specifies the same at least one operand as the last valid micro-operation processed by the processing circuitry. If so, then the control circuitry prevents the processing circuitry processing the current micro-operation so that an output register is not updated in response to the current micro-operation, and outputs the current value stored in the output register as the result of the current micro-operation. This allows power consumption to be reduced or performance to be improved by not repeating the same computation. | 2015-10-22 |
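The reuse check in this entry (same operation and same operands as the last valid micro-operation means the output register already holds the answer) is easy to model in software. This is a toy functional model with invented names, not the hardware described in the patent:

```python
# Toy model of back-to-back micro-op result reuse: when the current micro-op
# matches the last valid one, skip execution and return the output register.
class ReuseUnit:
    def __init__(self, ops):
        self.ops = ops            # opcode -> function implementing the operation
        self.last = None          # (opcode, operands) of the last valid micro-op
        self.output_register = None
        self.executions = 0       # count of real (non-skipped) executions

    def issue(self, opcode, *operands):
        if self.last == (opcode, operands):
            return self.output_register        # reuse: no recomputation, no update
        self.output_register = self.ops[opcode](*operands)
        self.executions += 1
        self.last = (opcode, operands)
        return self.output_register
```

Issuing the same micro-op twice in a row performs one real execution; any change in opcode or operands forces a recomputation, mirroring the "last valid micro-operation" condition.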
20150301828 | PROCESSOR CORE ARRANGEMENT, COMPUTING SYSTEM AND METHODS FOR DESIGNING AND OPERATING A PROCESSOR CORE ARRANGEMENT - The invention relates to a method of designing a processor core arrangement which comprises a first processor core for operation at a first operation frequency and having an associated first leakage and a second processor core for operation at a second operation frequency lower than the first operation frequency and having an associated second leakage lower than the first leakage. The processor core arrangement is capable of switching from the first processor core to the second processor core and vice versa. | 2015-10-22 |
20150301829 | SYSTEMS AND METHODS FOR MANAGING BRANCH TARGET BUFFERS IN A MULTI-THREADED DATA PROCESSING SYSTEM - A data processing system includes a processor configured to execute processor instructions of a first thread and processor instructions of a second thread, a first branch target buffer (BTB) corresponding to the first thread, a second BTB corresponding to the second thread, storage circuitry configured to store a borrow enable indicator corresponding to the first thread which indicates whether borrowing is enabled for the first thread, and control circuitry configured to allocate an entry for a branch instruction executed within the first thread in the first branch target buffer but not the second branch target buffer if borrowing is not enabled by the borrow enable indicator and in the first branch target buffer or the second branch target buffer if borrowing is enabled by the borrow enable indicator and the second thread is not enabled. | 2015-10-22 |
20150301830 | PROCESSOR WITH VARIABLE PRE-FETCH THRESHOLD - A method and apparatus for controlling pre-fetching in a processor. A processor includes an execution pipeline and an instruction pre-fetch unit. The execution pipeline is configured to execute instructions. The instruction pre-fetch unit is coupled to the execution pipeline. The instruction pre-fetch unit includes instruction storage to store pre-fetched instructions, and pre-fetch control logic. The pre-fetch control logic is configured to fetch instructions from memory and store the fetched instructions in the instruction storage. The pre-fetch control logic is also configured to provide instructions stored in the instruction storage to the execution pipeline for execution. The pre-fetch control logic is further configured to set a maximum number of instruction words to be pre-fetched for execution subsequent to execution of an instruction currently being executed in the instruction pipeline. The maximum number is based on a value contained in a pre-fetch threshold field of an instruction executed in the execution pipeline. | 2015-10-22 |
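The variable-threshold behavior above (an executed instruction's pre-fetch threshold field caps how many words are fetched ahead) can be modeled in a few lines. The field name and memory representation are assumptions for illustration:

```python
class PrefetchUnit:
    """Toy model of a pre-fetch unit whose look-ahead depth is set by a
    pre-fetch threshold field of an executed instruction."""
    def __init__(self, memory):
        self.memory = memory       # list of instruction words
        self.threshold = 0         # max words to pre-fetch ahead
        self.buffer = []           # instruction storage for pre-fetched words

    def execute(self, pc, instruction):
        # If this instruction carries a pre-fetch threshold field, adopt it.
        if "prefetch_threshold" in instruction:
            self.threshold = instruction["prefetch_threshold"]
        # Pre-fetch up to `threshold` words past the current instruction.
        end = min(pc + 1 + self.threshold, len(self.memory))
        self.buffer = self.memory[pc + 1:end]
```

The threshold persists across instructions that lack the field, matching the idea that the most recently executed threshold-bearing instruction governs subsequent pre-fetching.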
20150301831 | SELECT LOGIC FOR THE INSTRUCTION SCHEDULER OF A MULTI STRAND OUT-OF-ORDER PROCESSOR BASED ON DELAYED RECONSTRUCTED PROGRAM ORDER - A processing device comprises select logic to schedule a plurality of instructions for execution. The select logic calculates a reconstructed program order (RPO) value for each of a plurality of instructions that are ready to be scheduled for execution. The select logic creates an ordered list of instructions based on the delayed RPO values, the delayed RPO values comprising the calculated RPO values from a previous execution cycle, and dispatches instructions for scheduling based on the ordered list. | 2015-10-22 |
20150301832 | DYNAMICALLY ENABLED BRANCH PREDICTION - Embodiments for a processor that selectively enables and disables branch prediction are disclosed. The processor may include counters to track a number of fetched instructions, a number of branches, and a number of mispredicted branches. A misprediction threshold may be calculated dependent upon the tracked number of branches and a predefined misprediction ratio. Branch prediction may then be disabled when the number of mispredictions exceeds the determined threshold value, dependent upon the branch rate. | 2015-10-22 |
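The counters-and-threshold scheme above can be sketched as follows. This is a simplified reading of the abstract: the threshold is the tracked branch count times the predefined ratio, and the separate branch-rate condition is not modeled:

```python
class BranchPredictorGate:
    """Toy model of selectively disabling branch prediction when observed
    mispredictions exceed branches * misprediction_ratio."""
    def __init__(self, misprediction_ratio):
        self.ratio = misprediction_ratio
        self.fetched = 0
        self.branches = 0
        self.mispredicted = 0
        self.enabled = True

    def record(self, is_branch, mispredicted=False):
        self.fetched += 1
        if is_branch:
            self.branches += 1
            if mispredicted:
                self.mispredicted += 1
        threshold = self.branches * self.ratio   # recomputed as counts grow
        self.enabled = self.mispredicted <= threshold
```

Because the threshold scales with the branch count, a predictor that mispredicts at a rate persistently above the configured ratio is switched off, while occasional mispredictions are tolerated.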
20150301833 | APPARATUS AND METHOD FOR HANDLING EXCEPTION EVENTS - Processing circuitry has a plurality of exception states for handling exception events, the exception states including a base level exception state and at least one further level exception state. Each exception state has a corresponding stack pointer indicating the location within the memory of a corresponding stack data store. When the processing circuitry is in the base level exception state, stack pointer selection circuitry selects the base level stack pointer as a current stack pointer indicating a current stack data store for use by the processing circuitry. When the processing circuitry is in a further level exception state, the stack pointer selection circuitry selects either the base level stack pointer or the further level stack pointer corresponding to the current further level exception state as a current stack pointer. | 2015-10-22 |
20150301834 | SENSING DATA READING DEVICE AND METHOD - A sensing data reading device and method applied to an electronic device are provided. The sensing data reading device supports a first operating system and a second operating system. The sensing data reading device includes: a sensing module for generating at least a sensing data; a hub coupled to the sensing module and adapted to read at least a sensing data; and a control circuit coupled to the sensing module and the hub to read at least a sensing data directly as soon as the electronic device switches to the first operating system and send a control signal to the hub as soon as the electronic device switches to the second operating system such that the hub reads the at least a sensing data. The sensing data reading device and method dispense with a switch circuit, thereby saving circuit area and cutting costs. | 2015-10-22 |
20150301835 | DECOUPLING BACKGROUND WORK AND FOREGROUND WORK - Systems, methods, and apparatus for separately loading and managing foreground work and background work of an application. In some embodiments, a method is provided for use by an operating system executing on at least one computer. The operating system may identify at least one foreground component and at least one background component of an application, and may load the at least one foreground component for execution separately from the at least one background component. For example, the operating system may execute the at least one foreground component without executing the at least one background component. In some further embodiments, the operating system may use a specification associated with the application to identify at least one piece of computer executable code implementing the at least one background component. | 2015-10-22 |
20150301836 | DISPLAY DEVICE AND METHOD OF CONTROLLING THEREFOR - A method of controlling an operation of a display device is described. The display device includes a memory including a self-refresh memory block, a display module and a controller configured to control the operation of the display device. The controller is configured to receive a power-off signal, store a system booting file and a predetermined snapshot image in a self-refresh memory block based on a predetermined self-refresh mode, receive a power-on signal, boot a system by extracting the system booting file from the self-refresh memory block, and control the display module to display the predetermined snapshot image. A content is continuously displayed by the display module when the power-off signal is received. An image configured by default is displayed by the display module after the power-on signal is received. A specific content is executed which is selected according to a user access frequency. | 2015-10-22 |
20150301837 | Structural Identification of Dynamically Generated, Pattern-Instantiation, Generated Classes - Structural identification of dynamically generated, pattern-instantiation classes may be utilized using structural descriptions. Instead of describing classes only by name, and using that name to locate that class, a class may be referred to by a generator function and arguments to the generator function. A structural description may specify the generator function and the parameters. In addition, a structural description of a class may be used as a parameter to a generator function specified by another structural description. A structural description may be used similarly to a class name for virtually any situation in which a class name may be used. Classes may be compared using their structural descriptions. For example, two structural descriptions may be considered to be the same class if they specify the same generator function and parameters. | 2015-10-22 |
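The core idea of this entry, identifying a class by its generator function and arguments rather than by a flat name, and comparing classes by comparing those descriptions, can be sketched with a small value type. The generator names below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StructuralDescription:
    """A class identified by (generator function, arguments) rather than by
    name. Arguments may themselves be structural descriptions, so
    descriptions nest. frozen=True gives value-based equality and hashing,
    so two descriptions naming the same generator and arguments compare
    equal -- i.e., denote the same class."""
    generator: str
    args: tuple = ()

list_of_int = StructuralDescription("SpecializeList", ("int",))

# A description used as an argument to another description:
map_int_to_list = StructuralDescription(
    "SpecializeMap", ("int", StructuralDescription("SpecializeList", ("int",))))

same_class = list_of_int == StructuralDescription("SpecializeList", ("int",))
```

Because the descriptions are hashable values, they can also serve as dictionary keys, e.g. in a cache of already-generated classes keyed by structural identity.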
20150301838 | System and Method for Enabling Customized Notifications on an Electronic Device - A system and method are provided for enabling customized notifications on an electronic device. The method comprises displaying a recommendation on the electronic device to create a custom notification for at least one communication type. The method also comprises enabling the custom notification to be created for the at least one communication type. The recommendation may be determined using usage data associated with the at least one communication type. The custom notification may be created by navigating to a custom notifications user interface. The custom notification may also be created by automatically determining at least one custom notification setting. | 2015-10-22 |
20150301839 | MFT LOAD BALANCER - In various embodiments, a software load balancer is deployed to distribute incoming managed file transfer traffic among multiple nodes running in a cluster. In one aspect, a separate instance of the software load balancer may be instantiated for each protocol that will be used (e.g., FTP, FTP-SSL & SSH-FTP). In one embodiment, the software load balancer includes a standalone java application that is configured to run outside the purview of an application server. In a further embodiment, the software load balancer is able to manage transfers to multiple nodes (e.g., multiple managed file transfer servers) in a cluster. Therefore, in one embodiment, only one instance of the software load balancer needs to be deployed. | 2015-10-22 |
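The per-protocol-instance arrangement described above can be sketched in a few lines. Round-robin is an assumed distribution policy (the abstract does not name one), and all identifiers are illustrative:

```python
import itertools

class MFTLoadBalancer:
    """Sketch of one load-balancer instance per transfer protocol, each
    distributing incoming transfers round-robin across the cluster nodes."""
    def __init__(self, nodes, protocols=("ftp", "ftps", "sftp")):
        # One independent rotation per protocol, over the same node list.
        self.instances = {p: itertools.cycle(nodes) for p in protocols}

    def route(self, protocol):
        return next(self.instances[protocol])

lb = MFTLoadBalancer(["node1", "node2", "node3"])
lb.route("ftp")   # node1
lb.route("ftp")   # node2
lb.route("sftp")  # node1 -- each protocol's instance cycles independently
```

Keeping a separate cycle per protocol mirrors the idea of a separate load-balancer instance per protocol while still managing one shared cluster of nodes.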
20150301840 | Dependency-driven Co-Specialization of Specialized Classes - The loading or operation of a specialized class may trigger the specialization of other classes. A compiler may be configured to recognize dependency relationships between generic classes and to describe the classes in terms of the type variables of the triggering types (e.g., the types and/or type parameterizations) that trigger the specialization of classes based on the specialization of a first class. A compiler may include information, such as structural references, indicating dependency relationships between classes when generating class files. Thus, the class file may include information indicating that a class extends a class resulting from applying a specialization code generator to an argument. Loading a first class may trigger the loading of a second class described by a structural description such that a specializer (and/or class loader) may apply the structural description to generate and load the second class for the particular parameterization. | 2015-10-22 |
20150301841 | BINARY TRANSLATION REUSE IN A SYSTEM WITH ADDRESS SPACE LAYOUT RANDOMIZATION - Generally, this disclosure provides systems, methods and computer readable media for binary translation (BT) reuse. The system may include a BT module to translate a region of code from a first instruction set architecture (ISA) to a second ISA, for execution associated with a first process. The BT module may also be configured to store a first physical page number associated with the translated code and the first process. The system may also include a processor to execute the translated code and to update a virtual address instruction pointer associated with the execution. The system may further include a translation reuse module to validate the translated code for reuse by a second process. The validation may include generating a second physical page number based on a page table mapping of the updated virtual address instruction pointer and matching the second physical page number to the stored first physical page number. | 2015-10-22 |
20150301842 | DETERMINING OPTIMAL METHODS FOR CREATING VIRTUAL MACHINES - A computer receives at least one requirement for a new VM. The computer identifies an existing VM to be modified during the generation of the new VM. The computer determines at least one step necessary to create the new VM configuration from the existing VM. The computer presents at least one pathway to the new VM from the existing VM. The computer receives a selection of a presented pathway to create the new VM. | 2015-10-22 |
20150301843 | Content-Based Swap Candidate Selection - Techniques for building a list of swap candidate pages for host swapping are provided. In one embodiment, a host system can determine a swap target virtual machine (VM) and a target number of swap candidate pages. The host system can further select a memory page from a memory space of the swap target VM and can check whether the memory page is sharable or compressible. If the memory page is sharable or compressible, the host system can add the memory page to the list of swap candidate pages. | 2015-10-22 |
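The selection loop in this abstract (walk the swap target VM's pages, keep those that are sharable or compressible, stop at the target count) translates almost directly into code. The predicate functions are assumptions standing in for the host's real content checks:

```python
def build_swap_candidates(pages, target_count, is_sharable, is_compressible):
    """Sketch of content-based swap candidate selection: scan the swap
    target VM's memory pages and collect only those that are sharable or
    compressible, up to the target number of candidates."""
    candidates = []
    for page in pages:
        if len(candidates) >= target_count:
            break
        if is_sharable(page) or is_compressible(page):
            candidates.append(page)
    return candidates
```

A toy usage: treating an all-zero page as sharable and a single-repeated-byte page as compressible, a scan over mixed pages keeps only the cheap-to-swap ones.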
20150301844 | SHADOW VNICS FOR THE CONTROL AND OBSERVABILITY OF IO VIRTUAL FUNCTIONS - A method for controlling a network interface controller (NIC). The method includes receiving, by a host operating system (OS) executing on a computer system, an instruction to map the NIC virtual function (VF) to a first virtual machine executing on the computer system. The method further includes allocating, according to the NIC VF, first NIC resources on a physical NIC operatively connected to the computer system, mapping the NIC VF to the first virtual machine, creating, in the host OS, a shadow virtual NIC for the first NIC resources allocated to the NIC VF, assigning the shadow virtual NIC to the first virtual machine, receiving, by the physical NIC, a first packet targeting the first virtual machine, and sending the first packet directly to the first virtual machine. | 2015-10-22 |
20150301845 | Method And System For Closing Application - A method and system for closing an application program are provided. The method comprises: a deployment platform determining a virtual machine relevant to an application system according to configuration information of the application system when the application system is to be closed; and the deployment platform sending an indication message for closing the application system to the virtual machine relevant to the application system, wherein, the indication message for closing the application system is used for indicating the virtual machine relevant to the application system to close application programs in the application system in sequence. Through the above-mentioned technical scheme, the deployment platform indicates a virtual machine relevant to an application system required to be closed to close application programs in the application system in sequence, which enables multiple application programs of the application system, deployed on multiple virtual machines, to be closed in sequence. | 2015-10-22 |
20150301846 | Automated Network Configuration of Virtual Machines in a Virtual Lab Environment - Methods, systems, and computer programs for creating virtual machines (VM) and associated networks in a virtual infrastructure are presented. The method defines virtual network templates in a database, where each virtual network template includes network specifications. A configuration of a virtual system is created, which includes VMs, virtual lab networks associated with virtual network templates, and connections from the VMs to the virtual lab networks. Further, the configuration is deployed in the virtual infrastructure resulting in a deployed configuration. The deployment of the configuration includes instantiating in the virtual infrastructure the VMs of the configuration, instantiating in the virtual infrastructure the virtual lab networks, retrieving information from the database, and creating and executing programming instructions for the VMs. The database information includes the network specifications from the virtual network templates associated with the virtual lab networks, and network resources for the virtual lab networks from a pool of available network resources. The programming instructions are created for the particular Guest Operating System (GOS) running in each VM based on the GOS and on the retrieved database information. When executed in the corresponding VM GOS, the programming instructions configure the VMs network interfaces with the corresponding network specifications. | 2015-10-22 |
20150301847 | Environment Virtualization - An environment virtualization infrastructure (EVI) is made up of storage, network, and compute elements which are virtualized in a virtual platform that is implemented on a hardware platform. In some embodiments, the EVI is dynamic and is expressed as a collection of downloadable data structures. The virtual platform can include an EVI with a definable topology and an emulator that configures various components of the EVI automatically. In some embodiments, the emulator is invoked via an Application Programming Interface. The EVI can be implemented as a Software as a Service. In some embodiments, the EVI includes virtual environments that have routers, switches, operating systems, and software applications. | 2015-10-22 |
20150301848 | METHOD AND SYSTEM FOR MIGRATION OF PROCESSES IN HETEROGENEOUS COMPUTING ENVIRONMENTS - Migrating a process from a source system with a source operating system to a target system with a target operating system is provided, where the source and target systems or the source and target operating systems are incompatible. The migrating includes: employing an emulator at the target system to execute code associated with the process being migrated, the emulator performing: translating of system calls and runtime library calls for the source operating system to calls of the target operating system using a system call translator and runtime library translator; translating source application code associated with the process into binary target application code executable on the target system, using a compiler, where the source application code has not been translated; and executing the translated binary target application code on the target system, and discontinuing emulation of the process at the target system once the executing begins. | 2015-10-22 |
20150301849 | APPARATUS AND METHOD FOR VALIDATING APPLICATION DEPLOYMENT TOPOLOGY IN CLOUD COMPUTING ENVIRONMENT - The present invention relates to an apparatus and a method for validating application deployment topology in a cloud environment. There is provided an apparatus for validating application deployment topology in a cloud environment comprising: a topology skeleton generator configured to generate, based on multiple VMs and script packages running on the VMs created by a user and required to deploy an application as well as running order of script packages and data dependency between script packages set by the user, a topology skeleton that comprises at least scripts of script packages of respective VMs and running order of the script packages; and a simulator configured to simulate a runtime environment in the cloud environment at the apparatus, thereby validating the running order and data dependency with respect to the topology skeleton, wherein the simulator is installed in the apparatus by using a simulator installation package retrieved from the cloud environment. | 2015-10-22 |
20150301850 | APPARATUS AND METHOD FOR PROVIDING VIRTUALIZATION SERVICES - An apparatus and method for providing virtualization services in a mobile device are provided. The virtualization service providing apparatus includes an installer module configured to receive a hypervisor image and an agent for installing the hypervisor image, from a host server, a virtualization service module configured to store the hypervisor image and the agent and to transmit a request for rebooting the mobile device, in response to determining that the hypervisor image and the agent are authenticated by an authentication server, and a power management module configured to receive the request, and to reboot the mobile device. | 2015-10-22 |
20150301851 | MANAGING A SERVER TEMPLATE - A non-transitory computer-readable storage medium may comprise instructions for managing a server template stored thereon. When executed by at least one processor, the instructions may be configured to cause at least one computing system to at least convert the server template to a corresponding virtual machine, manage the corresponding virtual machine, and convert the corresponding virtual machine back into a template format. | 2015-10-22 |
20150301852 | SYSTEM FOR DOWNLOADING AND EXECUTING A VIRTUAL APPLICATION - A virtual process manager for use with a client application. Both the virtual process manager and the client application are installed on a client computing device. The client application is configured to receive a user command to execute a virtual application at least partially implemented by a virtualized application file stored on a remote computing device. In response to the user command, the client application commands the virtual process manager to execute the virtualized application file. Without additional user input, the virtual process manager downloads the virtualized application file from the remote computing device and executes the virtual application at least partially implemented by the downloaded virtualized application file on the client computing device. The client application may comprise a conventional web browser or operating system shell process. | 2015-10-22 |
20150301853 | SYSTEMS AND METHODS FOR PHYSICAL AND LOGICAL RESOURCE PROFILING, ANALYSIS AND BEHAVIORAL PREDICTION - Methods and/or systems for performing workload analysis within an arrangement of interconnected computing devices, such as a converged infrastructure, are disclosed. A prediction system may generate a workload associated with physical and/or logical components of the converged infrastructure that are utilized to execute a client resource. The prediction system may monitor the utilization behavior of the various logical and/or physical components associated with the workload over a particular period of time to generate a workload profile. Subsequently, the prediction system may execute a prediction workload analysis algorithm that accesses the workload profile to identify optimal physical resources in the converged infrastructure that may be available to execute other workloads. | 2015-10-22 |
20150301854 | APPARATUS AND METHOD FOR HARDWARE-BASED TASK SCHEDULING - Provided are a method and apparatus for task scheduling based on hardware. The method for task scheduling in a scheduler accelerator based on hardware includes: managing task related information based on tasks in a system; updating the task related information in response to a request from a CPU; selecting a candidate task to be run next after a currently running task for each CPU on the basis of the updated task related information; and providing the selected candidate task to each CPU. The scheduler accelerator supports the method for task scheduling based on hardware. | 2015-10-22 |
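The per-CPU candidate-selection step — pick, for each CPU, the next task to run from the maintained task information — can be sketched in software. The task dictionaries and the lower-number-means-higher-priority convention are illustrative assumptions, not the hardware accelerator's actual data layout:

```python
def pick_candidates(tasks, running, cpus):
    """For each CPU, select the highest-priority ready task (lower number =
    higher priority) that is not running or already picked for another CPU."""
    candidates, taken = {}, set(running.values())
    for cpu in cpus:
        ready = [t for t in tasks
                 if t['state'] == 'ready' and t['id'] not in taken]
        if ready:
            best = min(ready, key=lambda t: t['priority'])
            candidates[cpu] = best['id']
            taken.add(best['id'])
    return candidates
```

In the patented design this selection runs in dedicated hardware so the CPUs only consume the pre-computed candidates.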
20150301855 | PREFERENTIAL CPU UTILIZATION FOR TASKS - In a distributed server storage environment, a set of like tasks to be performed is organized into a first group, and a last used processing group associated with the like tasks is stored. Upon a subsequent dispatch, the last used processing group is compared to other processing groups and the tasks are assigned to a processing group based upon a predetermined threshold. | 2015-10-22 |
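The dispatch rule — stick with the last used processing group unless the comparison against other groups crosses a threshold — can be sketched as below. The load field, the affinity rationale, and the dictionary shapes are illustrative assumptions, not the claimed mechanism:

```python
def dispatch(task_group, processing_groups, threshold):
    """Prefer the task group's last-used processing group (e.g. for cache
    affinity) unless its load exceeds the least-loaded group's by more
    than `threshold`; record the choice for the next dispatch."""
    least = min(processing_groups, key=lambda pg: pg['load'])
    chosen = least
    last_id = task_group.get('last_pg')
    if last_id is not None:
        last = next(pg for pg in processing_groups if pg['id'] == last_id)
        if last['load'] - least['load'] <= threshold:
            chosen = last
    task_group['last_pg'] = chosen['id']
    return chosen['id']
```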
20150301856 | TASK PROCESSOR - A task processor includes a CPU, a save circuit, and a task control circuit. A task control circuit is provided with a task selection circuit and state storage units associated with respective tasks. When executing a predetermined system call instruction, the CPU notifies the task control circuit accordingly. When informed of the execution of a system call instruction, the task control circuit selects a task to be subsequently executed in accordance with an output from the selection circuit. When an interrupt circuit receives a high-speed interrupt request signal, the task switching circuit controls the state transition of a task by executing an interrupt handling instruction designated by the interrupt circuit. | 2015-10-22 |
20150301857 | Activity Interruption Management - In response to determining that an activity has been postponed (e.g., interrupted or deferred), a computer system stores a record indicating that the activity is postponed. In response to determining that another activity has become active, the computer system stores a record indicating that the other activity is active. The computer system reminds a user to return to the postponed activity in response to determining that a reminder condition associated with the postponed activity has been satisfied. For example, the computer system may remind the user to return to the postponed activity in response to determining that the other activity has been completed. | 2015-10-22 |
20150301858 | MULTIPROCESSORS SYSTEMS AND PROCESSES SCHEDULING METHODS THEREOF - Scheduling methods for a multi-core processor system including multiple processors are provided. First, a process to be executed is chosen from a ready queue and analyzed to obtain a power consumption value of the process to be executed. Next, an idle processor is chosen from the processors and a total power consumption value of the system, through which the process to be executed is being executed in the idle processor, is estimated to obtain a first prediction result based on the obtained power consumption value. It is then determined whether to execute the process to be executed in the idle processor according to the first prediction result and a predetermined upper limit value. In some embodiments, the scheduling method may further provide preemption scheduling such that processes with high priority can be preferentially executed and processes can flexibly switch among different processor core clusters. | 2015-10-22 |
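The admission check — estimate total system power with the process placed on an idle processor and compare against the upper limit — can be sketched as follows. The processor dictionaries and the `wakeup_power` term are illustrative assumptions, not the patented prediction model:

```python
def pick_processor(process_power, idle_processors, current_total, power_cap):
    """Return the id of the first idle processor on which running the process
    keeps the predicted total system power within the cap, or None."""
    for proc in idle_processors:
        predicted = current_total + process_power + proc.get('wakeup_power', 0)
        if predicted <= power_cap:
            return proc['id']
    return None
```

Returning None corresponds to deferring the process rather than exceeding the predetermined upper limit.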
20150301859 | METHOD FOR SELECTING ONE OF SEVERAL QUEUES - A method for selecting one of several queues and for extracting one or more data segments from a selected queue for transmitting with the aid of an output interface includes: selecting the output interface by a first scheduler; selecting a number of queues by a second scheduler; selecting one queue from the number of queues by a third scheduler; and sending one or more data segments from the selected queue to the output interface for transmission. | 2015-10-22 |
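The three-stage selection — first scheduler picks the output interface, second picks a group of queues, third picks one queue, then segments are sent — can be sketched by composing three pluggable scheduler callbacks. The data shapes and callback signatures are illustrative assumptions:

```python
def send_segments(interfaces, pick_iface, pick_queue_group, pick_queue, n=1):
    """Three-stage selection: an output interface, then a group of its queues,
    then one queue; finally move up to n segments to the interface."""
    iface = pick_iface(interfaces)                    # first scheduler
    group = pick_queue_group(iface['queue_groups'])   # second scheduler
    queue = pick_queue(group)                         # third scheduler
    segments = [queue.pop(0) for _ in range(min(n, len(queue)))]
    iface['tx'].extend(segments)                      # hand off for transmission
    return segments
```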
20150301860 | TECHNIQUES FOR GENERATING INSTRUCTIONS TO CONTROL DATABASE PROCESSING - An apparatus includes a task selector to receive an indication of a database task to be performed, wherein the database task includes a set of subtasks; a source selector to receive an indication of a source device to perform the set of subtasks, and to retrieve from the source device an indication of a processing environment currently available within the source device that includes an identity and version level of a database routine of the source device; and an instruction generator to determine a set of languages able to be interpreted by the database routine based on the identity and version level, select a language of the set of languages in which to generate instructions for each subtask based on the processing environment, and generate and transmit the instructions to the source device. | 2015-10-22 |
20150301861 | INTEGRATED MONITORING AND CONTROL OF PROCESSING ENVIRONMENT - A method of managing components in a processing environment is provided. The method includes monitoring (i) a status of each of one or more computing devices, (ii) a status of each of one or more applications, each application hosted by at least one of the computing devices, and (iii) a status of each of one or more jobs, each job associated with at least one of the applications; determining that one of the status of one of the computing devices, the status of one of the applications, and the status of one of the jobs is indicative of a performance issue associated with the corresponding computing device, application, or job, the determination being made based on a comparison of a performance of the computing device, application, or job and at least one predetermined criterion; and enabling an action to be performed associated with the performance issue. | 2015-10-22 |
20150301862 | PREFERENTIAL CPU UTILIZATION FOR TASKS - A set of like tasks to be performed is organized into a first group, and a last used processing group associated with the like tasks is stored. Upon a subsequent dispatch, the last used processing group is compared to other processing groups and the tasks are assigned to a processing group based upon a predetermined threshold. | 2015-10-22 |
20150301863 | Allocating Resources to Threads Based on Speculation Metric - Methods, reservation stations and processors for allocating resources to a plurality of threads based on the extent to which the instructions associated with each of the threads are speculative. The method comprises receiving a speculation metric for each thread at a reservation station. Each speculation metric represents the extent to which the instructions associated with a particular thread are speculative. The more speculative an instruction, the more likely the instruction has been incorrectly predicted by a branch predictor. The reservation station then allocates functional unit resources (e.g. pipelines) to the threads based on the speculation metrics and selects a number of instructions from one or more of the threads based on the allocation. The selected instructions are then issued to the functional unit resources. | 2015-10-22 |
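The allocation step — give threads whose instructions are more speculative a smaller share of the functional unit resources — can be sketched with an inverse weighting. The weighting formula and the one-slot floor are illustrative assumptions, not the reservation station's actual policy:

```python
def allocate_slots(speculation, total_slots):
    """Weight each thread inversely to its speculation metric (higher metric
    = more speculative = fewer issue slots), guaranteeing at least one slot
    per thread so no thread is starved entirely."""
    weights = {tid: 1.0 / (1.0 + m) for tid, m in speculation.items()}
    scale = total_slots / sum(weights.values())
    return {tid: max(1, int(w * scale)) for tid, w in weights.items()}
```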
20150301864 | RESOURCE ALLOCATION METHOD - A resource allocation method adapted to a mobile device having a multi-core central processing unit (CPU) is provided. The CPU executes at least one application. The method includes steps as follows. A usage status of each of the at least one application is obtained according to a level of concern of a user for each of the at least one application. A sensitivity of at least one thread of each of the at least one application is determined according to the usage status of each of the at least one application. Resources of the CPU are allocated according to the sensitivity of the at least one thread run by the cores. | 2015-10-22 |
20150301865 | HARDWARE RESOURCE ALLOCATION FOR APPLICATIONS - In some examples, in a virtual environment, multiple virtual machines may be executing on a physical computing node. Each of the multiple virtual machines may host one or more applications, each of which utilizes at least a portion of a hardware resource of the physical computing node. A hypervisor of the virtual environment may be configured to recognize utilization patterns of the applications and allocate portions of the hardware resource to each of the applications in accordance with respective utilization patterns of the applications. | 2015-10-22 |
20150301866 | ANALYSIS METHOD, ANALYSIS APPARATUS AND COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN ANALYSIS PROGRAM - For services that each include a plurality of processes spanning a plurality of hierarchies, service information is stored in which the processes of each service are grouped at a predetermined hierarchy, taking the presence or absence of a common hierarchy into consideration. Then, based on log data and the service information for a plurality of services, a first decision process is performed to decide the presence or absence of an abnormality in a process included in one or more services. Further, a second decision process is performed which, where the process decided to be abnormal is a grouped grouping process, expands that grouping process into one or more processes in a hierarchy lower than the predetermined hierarchy based on the service information, and decides the presence or absence of an abnormality in the one or more expanded processes. | 2015-10-22 |
20150301867 | DETERMINISTIC REAL TIME BUSINESS APPLICATION PROCESSING IN A SERVICE-ORIENTED ARCHITECTURE - Methods, apparatus, and products for deterministic real time business application processing in a service-oriented architecture (‘SOA’), the SOA including SOA services, each SOA service carrying out a processing step of the business application, and each SOA service being a real time process executable on a real time operating system of a generally programmable computer. Deterministic real time business application processing according to embodiments of the present invention includes configuring the business application with real time processing information and executing the business application in the SOA in accordance with the real time processing information. | 2015-10-22 |
20150301868 | SHARED RESOURCE SEGMENTATION - Methods and systems for resource segmentation include dividing a time horizon to be partitioned into time slots based on a minimum partition size; determining resource usage for multiple virtual machines in each of the plurality of time slots; determining a set of partitioning schemes that includes every possible partitioning of the time slots into a fixed number of partitions; for each partitioning scheme in the set of partitioning schemes, determining a cost using a processor based on a duration of each partition and a resource usage metric; and selecting a partitioning scheme that has a lowest cost. | 2015-10-22 |
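The exhaustive search this abstract describes — enumerate every partitioning of the time slots into a fixed number of partitions, score each by a duration- and usage-based cost, keep the cheapest — can be sketched directly. The peak-times-duration cost in the usage example is an illustrative stand-in for the claimed resource usage metric:

```python
from itertools import combinations

def partitionings(n_slots, k):
    """All ways to cut n_slots contiguous time slots into k non-empty partitions."""
    for cuts in combinations(range(1, n_slots), k - 1):
        bounds = (0,) + cuts + (n_slots,)
        yield [(bounds[i], bounds[i + 1]) for i in range(k)]

def best_partitioning(usage, k, cost):
    """Exhaustively score every scheme; cost() sees each partition's usage slice."""
    return min(partitionings(len(usage), k),
               key=lambda parts: sum(cost(usage[a:b]) for a, b in parts))
```

For example, with per-slot usage `[1, 1, 10, 10]`, two partitions, and `cost = lambda seg: max(seg) * len(seg)` (provision each partition for its peak), the cheapest scheme separates the quiet slots from the busy ones.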
20150301869 | LOAD BALANCING WITH GRANULARLY REDISTRIBUTABLE WORKLOADS - In one embodiment, a computer-implemented method includes receiving a plurality of tasks to be assigned to a plurality of subgroups of virtual servers. A first plurality of the tasks is assigned to a first subgroup, where the first subgroup includes two or more virtual servers. For each of the first plurality of tasks assigned to the first subgroup, a virtual server is selected within the first subgroup, and the task is assigned to the selected virtual server. A first virtual server is migrated, by a computer processor, from the first subgroup of virtual servers to a second subgroup of virtual servers, if at least one predetermined condition is met, where the migration maintains in the first subgroup at least one of the first plurality of tasks assigned to the first subgroup. | 2015-10-22 |
20150301870 | Systems and Methods for Reordering Sequential Actions - Systems and methods for reordering sequential actions in a process or workflow by determining which actions are required to enable another action in the process or workflow. | 2015-10-22 |
20150301871 | BUSY LOCK AND A PASSIVE LOCK FOR EMBEDDED LOAD MANAGEMENT - Embodiments relate to managing exclusive control of a shareable resource between a plurality of concurrently executing threads. An aspect includes determining the number of concurrently executing threads waiting for exclusive control of the shareable resource. Another aspect includes, responsive to a determination that the number of concurrently executing threads waiting for exclusive control of the shareable resource exceeds a pre-determined value, one or more of said concurrently executing threads terminating its wait for exclusive control of the shareable resource. Another aspect includes, responsive to a determination that the number of concurrently executing threads waiting for exclusive control of the shareable resource is less than a pre-determined value, one or more of the concurrently executing threads that terminated their wait for exclusive control of the shareable resource restarting a wait for exclusive control of the shareable resource. | 2015-10-22 |
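The waiter-count threshold can be sketched with an ordinary lock wrapped in waiter bookkeeping: a thread declines to wait (and can retry later) once the number of threads already waiting reaches the limit. The class shape and timeout handling are illustrative assumptions, not the claimed busy/passive lock pair:

```python
import threading

class BoundedWaitLock:
    """A lock whose would-be waiters give up (return False) when the number
    of threads already waiting has reached max_waiters; callers may retry
    once load drops, mirroring the restart-the-wait aspect."""

    def __init__(self, max_waiters):
        self._lock = threading.Lock()
        self._meta = threading.Lock()   # guards the waiter count
        self._waiters = 0
        self._max = max_waiters

    def acquire(self, timeout=1.0):
        with self._meta:
            if self._waiters >= self._max:
                return False            # terminate the wait: too crowded
            self._waiters += 1
        got_it = self._lock.acquire(timeout=timeout)
        with self._meta:
            self._waiters -= 1
        return got_it

    def release(self):
        self._lock.release()
```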
20150301872 | PROCESS COOPERATION METHOD, PROCESS COOPERATION PROGRAM, AND PROCESS COOPERATION SYSTEM - A process cooperation method includes storing in a first storage device a first process result as a result of execution of a first process by a first processor and transmitting the first process result to a second processor, storing in a second storage device a second process result as a result of execution of a second process by the second processor based on the first process result received from the first processor, and transmitting the second process result to a third processor, and moreover transmitting the second process result and an identifier identifying the third processor to the first processor, and storing in the first storage device the second process result and the identifier received from the second processor by the first processor in association with the first process result. | 2015-10-22 |
20150301873 | METHOD AND SYSTEM FOR EXPANDING WEBAPP APPLICATION FUNCTION - A method and system for expanding a WebApp application function. The method comprises: adding to the WebApp a function expansion field which contains an expansion JS function and the address of a local application that responds to requests of the expansion JS function; sending the parameters of the function to a browser kernel by calling the expansion JS function; a WebApp framework setting the address of a local application program to the address of the local application that responds to the request of the expansion JS function, according to the calling message received by the browser kernel; activating the target application program specified by that address; and the activated target application program executing the expansion JS function according to the parameters of the expansion JS function and returning the execution result to the WebApp. | 2015-10-22 |
20150301874 | SYSTEMS AND METHODS OF SECURE DOMAIN ISOLATION INVOLVING SEPARATION KERNEL FEATURES - Systems and methods are disclosed for providing secure information processing. In one exemplary implementation, there is provided a method of secure domain isolation. Moreover, the method may include configuring a computing component with data/programming associated with address swapping and/or establishing isolation between domains or virtual machines, processing information such as instructions from an input device while keeping the domains or virtual machines separate, and/or performing navigating and/or other processing among the domains or virtual machines as a function of the data/programming and/or information, wherein secure isolation between the domains or virtual machines is maintained. | 2015-10-22 |
20150301875 | PERSISTING AND MANAGING APPLICATION MESSAGES - Embodiments are directed to automatically persisting specified messages, to providing versioning for persisted messages and to querying persisted messages. In one scenario, a computer system establishes a repository service that is subscribed to specified types of messages, where the messages are sent from publishers to a message queue maintained by a message managing service, and where each message includes a data structure that has certain data or a certain type of data. The repository service listens for the specified types of messages to which the repository service is subscribed and receives messages of the specified type to which the repository service is subscribed. The repository service further persists at least a portion of each message received by the repository service in a data store. | 2015-10-22 |
20150301876 | End-to- End Application Tracking Framework - Novel tools and techniques for tracing application execution and performance. Some of the tools provide a framework for monitoring the execution and/or performance of applications in an execution chain. In some cases, the framework can accomplish this monitoring with a few simple calls to an application programming interface on an application server. In other cases, the framework can provide for the passing of traceability data in protocol-specific headers of existing inter-application (and/or intra-application) communication protocols. | 2015-10-22 |
20150301877 | NAMING OF NODES IN NET FRAMEWORK - A system for naming a monitored process that handles requests in a framework such as the .NET framework. The process may be implemented by a .NET application framework within an IIS web server. The naming system allows user-readable names that are more than just numbers or indexes. The naming system is configured from a single location rather than from multiple locations, making it much easier to configure, change and update. | 2015-10-22 |
20150301878 | Interactive, Constraint-Network Prognostics and Diagnostics To Control Errors and Conflicts (IPDN) - Methods for interactively preventing and detecting conflicts and errors (CEs) through prognostics and diagnostics. Centralized and Decentralized Conflict and Error Prevention and Detection (CEPD) Logic is developed for prognostics and diagnostics over three types of real-world constraint networks: random networks (RN), scale-free networks (SFN), and Bose-Einstein condensation networks (BECN). A method is provided for selecting an appropriate CEPD algorithm from a plurality of algorithms having either centralized or decentralized CEPD logic, based on analysis of the characteristics of the CEPD algorithms and the characteristics of the constraint network. | 2015-10-22 |
20150301879 | SEMICONDUCTOR DEVICE HAVING A SERIAL COMMUNICATION CIRCUIT FOR A HOST UNIT - In a serial communication circuit, a data extracting section extracts reception data based on a reception clock signal at the maximum speed. A pattern determining section compares a reception bit pattern of the reception data, corresponding to a characteristic pattern, with each of a plurality of detection bit patterns for the characteristic pattern, and indicates when the reception bit pattern matches one of the detection bit patterns. A periodicity determining section determines a period in which the reception bit pattern matches the detection bit pattern, based on the pattern match indication, detects that the detection bit pattern emerges continuously in the stream of reception data in every such period, and determines a generation difference between the transmission and reception speeds based on the detection bit pattern. A transmission rate setting section determines the transmission speed of a connected device transmitting the reception data, based on the generation difference and the maximum speed. | 2015-10-22 |
20150301880 | PROVIDING BOOT DATA IN A CLUSTER NETWORK ENVIRONMENT - A computer cluster includes a group of connected computers that work together essentially as a single system. Each computer in the cluster is called a node. Each node has a boot device configured to load an image of an operating system into the node's main memory. Sometimes the boot device of a first node experiences a problem that prevents the operating system from loading. This can affect the entire cluster. Some aspects of the disclosure, however, are directed to operations that determine the problem with the first node's boot device based on a communication sent via a first communications network. Further, the operations can communicate to the first node a copy of boot data from a second node's boot device. The copy of the boot data is sent via a second communications network different from the first communications network. The copy of the boot data can solve the first boot device's problem. | 2015-10-22 |
20150301881 | GENERATING A DATA STRUCTURE TO MAINTAIN ERROR AND CONNECTION INFORMATION ON COMPONENTS AND USE THE DATA STRUCTURE TO DETERMINE AN ERROR CORRECTION OPERATION - Provided are a computer program product, system, and method for generating a data structure to maintain error and connection information on components and using the data structure to determine an error correction operation. For each of a plurality of first level components in enclosures connected to second level components, errors at the first level component and a connection between the first level component to one of the second level components are determined and error variables are set to indicate whether an error was reported at the first level component. A data structure is generated indicating connections among the first level components and the second level components. The error variable values and the data structure are used to determine an error correction operation with respect to at least one of the first level component and the connected second level component. | 2015-10-22 |
20150301882 | RESILIENT OPTIMIZATION AND CONTROL FOR DISTRIBUTED SYSTEMS - A method for controlling a system including a plurality of subsystems, includes receiving operational data from the plurality of subsystems of the system. | 2015-10-22 |
20150301883 | SYSTEMS AND METHODS FOR PROPAGATING HEALTH OF A CLUSTER NODE - The present disclosure describes systems and methods for propagating port state to intermediary devices of a cluster in a static link aggregation environment. The methods and systems include a cluster comprising a plurality of intermediary devices in communication with a network device via a static link aggregation comprising aggregated ports from different intermediary devices of the cluster. A first device of the static link aggregation is configured to detect that a health of the first device is below a predetermined threshold and, responsive to the detection, identify one or more ports in the aggregated ports as down. A second device of the link aggregation is configured to, responsive to the identification, remove the ports from a distribution list for the static link aggregation. Upon detection that a health of a device is above a predetermined threshold, the first device may identify the ports as up. | 2015-10-22 |
20150301884 | CACHE MEMORY ERROR DETECTION CIRCUITS FOR DETECTING BIT FLIPS IN VALID INDICATORS IN CACHE MEMORY FOLLOWING INVALIDATE OPERATIONS, AND RELATED METHODS AND PROCESSOR-BASED SYSTEMS - Aspects disclosed herein include cache memory error detection circuits for detecting bit flips in valid indicators (e.g., valid bits) in cache memory following invalidate operations. Related methods and processor-based systems are also disclosed. If a cache hit results from access to a cache entry following an invalidate operation, a bit flip(s) has occurred in a valid indicator of the cache entry. This is because the valid indicator should indicate an invalid state following the invalidate operation of the cache entry, as opposed to a valid state. Thus, a cache memory error detection circuit is configured to determine if an invalidate operation was performed on the cache entry. The cache memory error detection circuit can cause a cache miss or an error for the accessed cache entry to be generated as a result, even though the valid indicator for the cache entry indicates a valid state due to the bit flip(s). | 2015-10-22 |
20150301885 | Neighboring Word Line Program Disturb Countermeasure For Charge-Trapping Memory - Techniques are provided for reading data from memory cells which are arranged along a common charge trapping layer. One example is in a 3D stacked non-volatile memory device. Memory cells on a word line layer WLLn can be disturbed by programming of memory cells on an adjacent word line layer WLLn+1, resulting in uncorrectable errors. In this case, the memory cells on WLLn can be read in a data recovery read operation which applies an elevated pass voltage to WLLn+1. The elevated pass voltage causes a decrease and narrowing of the threshold voltages on WLLn which facilitates reading. The data recovery read operation compensates for the lower threshold voltages of the cells by lowering the control gate voltage, raising the source voltage or adjusting a sensing period, demarcation level or pre-charge level in sensing circuitry. The elevated pass voltage can be stepped up in repeated read attempts until there are no uncorrectable errors or a limit is reached. | 2015-10-22 |
20150301886 | INFORMATION PROCESSING APPARATUS, SYSTEM, AND INFORMATION PROCESSING METHOD - The apparatus comprises a register, a transferring unit that transfers data stored in a first memory to a second memory, and a calculator that applies a checksum operation to the data being transferred by the transferring unit. When a first mode is set, the calculator transmits the result of the checksum operation to the transferring unit, and the transferring unit transfers the result to the second memory. When a second mode is set, the calculator applies the checksum operation to partial data that is included in the data and has been specified as a target of the checksum operation, and transmits the result of the checksum operation applied to the partial data to the register. | 2015-10-22 |
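The two modes in 20150301886 can be sketched as below. The additive checksum, the `part` slice format, and the function names are assumptions; the point is only the routing difference: mode 1 sends the whole-transfer checksum along to the second memory, mode 2 checksums a specified portion and latches the result in the register.

```python
# Sketch, with assumed names: mode 1 appends the checksum of the whole
# transfer to the destination; mode 2 checksums only a specified slice of
# the data and places the result in a register instead.

def checksum(data):
    return sum(data) & 0xFFFF  # simple 16-bit additive checksum (assumption)

def transfer(src, mode, part=None):
    dst, register = [], None
    dst.extend(src)                      # DMA-style copy to second memory
    if mode == 1:
        dst.append(checksum(src))        # result travels with the data
    else:
        lo, hi = part
        register = checksum(src[lo:hi])  # result goes to the register
    return dst, register

dst, reg = transfer([1, 2, 3], mode=1)
assert dst == [1, 2, 3, 6] and reg is None
dst, reg = transfer([1, 2, 3], mode=2, part=(1, 3))
assert dst == [1, 2, 3] and reg == 5
```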
20150301887 | HIGH-SPEED MULTI-BLOCK-ROW LAYERED DECODER FOR LOW DENSITY PARITY CHECK (LDPC) CODES - High-speed multi-block-row layered decoding for low density parity check (LDPC) codes is disclosed. In a particular embodiment, a method, in a device that includes a decoder configured to perform an iterative decoding operation, includes processing, at the decoder, first and second block rows of a layer of a parity check matrix simultaneously to generate a first output and a second output. The method includes performing processing of the first output and the second output to generate a first result of a first computation and a second result of a second computation. A length of a “critical path” of the decoder is reduced as compared to a critical path length in which a common feedback message is computed. | 2015-10-22 |
20150301888 | METHOD, MEMORY CONTROLLER, AND MEMORY SYSTEM FOR READING DATA STORED IN FLASH MEMORY - An exemplary method for reading data stored in a flash memory includes: selecting an initial gate voltage combination from a plurality of predetermined gate voltage combination options; controlling a plurality of memory units in the flash memory according to the initial gate voltage combination, and reading a plurality of bit sequences; performing a codeword error correction upon the plurality of bit sequences, and determining if the codeword error correction is successful; if the codeword error correction is not successful, determining an electric charge distribution parameter; determining a target gate voltage combination corresponding to the electric charge distribution parameter by using a look-up table; and controlling the plurality of memory units to read a plurality of updated bit sequences according to the target gate voltage combination. | 2015-10-22 |
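A minimal sketch of the retry flow in 20150301888 follows. The cell model, the decode predicate, and the look-up table contents are illustrative assumptions: a cell reads as 1 when its threshold voltage exceeds the applied gate voltage, and a failed ECC decode triggers a table-driven change of read voltage.

```python
# Sketch of read-retry via a charge-distribution look-up table (assumed
# single read voltage standing in for a "gate voltage combination").

cells = [0.4, 2.2, 2.6]               # stored threshold voltages (assumed)

def read_cells(vref):
    """Read each cell as 1 if its threshold exceeds the gate voltage."""
    return [1 if v > vref else 0 for v in cells]

def decode_ok(bits):
    return bits == [0, 1, 1]          # stand-in for codeword ECC success

lut = {"shifted_up": 2.0}             # charge parameter -> target voltage

def read_with_retry(initial_vref, measure_param):
    bits = read_cells(initial_vref)
    if decode_ok(bits):
        return bits, initial_vref
    target = lut[measure_param()]     # voltages matched to the charge shift
    return read_cells(target), target

# Initial voltage too low: all cells read 1, decode fails, LUT retry fixes it.
bits, used = read_with_retry(0.3, lambda: "shifted_up")
assert bits == [0, 1, 1] and used == 2.0
```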
20150301889 | NON-VOLATILE SEMICONDUCTOR STORAGE DEVICE AND METHOD OF TESTING THE SAME - Provided is a non-volatile semiconductor storage device which can be downsized with a simple circuit without impairing the function of an error correcting section, and a method of testing the non-volatile semiconductor storage device. An error correction circuit is configured to perform error detection and correction on only the same number of bits as the data bits, and a circuit for performing error detection and correction of the check bits is omitted to downsize the circuit. A multiplexer is provided for, in a testing state, replacing a part of the data bits read out from a storage element array with the check bits and inputting the check bits to the error correction circuit. Thus, error detection and correction of the check bits are performed, enabling shipment inspection of the check bits as well. | 2015-10-22 |
20150301890 | APPARATUS FOR ERROR DETECTION IN MEMORY DEVICES - The invention relates to an apparatus for transfer of data elements between a bus controller, such as a CPU, and a memory controller. An address translator is arranged to receive a write address from the CPU, to modify the write address and to send the modified write address to the memory controller. An ECC calculator is arranged to receive write input data associated with the write address, from the CPU, and to generate an error correction code on the basis of the write input data. A concatenator is arranged to receive the write input data from the CPU, and to receive the error correction code from the ECC calculator, and to concatenate the write input data and the error correction code to obtain write output data, and to send the write output data to the memory controller. | 2015-10-22 |
20150301891 | Data Recovery Method and Device - Technologies are described herein for recovering data in a storage device comprising a controller and a plurality of storage units. The controller receives a data stream, divides the data stream into a plurality of data blocks, and obtains a code block using the plurality of data blocks. When there are one or more blocks with damaged data in the plurality of data blocks and the code block, the controller obtains a sub-block from the Mth bit to the Nth bit of each block in the plurality of data blocks and the code block as a set, and reconstructs the data in one or more sub-blocks with damaged data using the other sub-blocks with undamaged data in the set. | 2015-10-22 |
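The sub-block recovery in 20150301891 can be sketched with a single XOR parity code block (an assumption: the abstract does not name the code). With XOR parity, any one damaged sub-block over the bit range [M, N) is rebuilt from the matching sub-blocks of the surviving blocks:

```python
# Sketch: rebuild the damaged sub-block of one block from the same-range
# sub-blocks of the other data blocks plus the XOR parity block.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def recover_subblock(blocks, damaged_idx, m, n):
    """Rebuild blocks[damaged_idx][m:n] from the other blocks' [m:n]."""
    survivors = [b[m:n] for i, b in enumerate(blocks) if i != damaged_idx]
    return xor_blocks(survivors)

data = [b"ABCD", b"EFGH", b"IJKL"]
parity = xor_blocks(data)          # the code block
blocks = data + [parity]
# Damage block 1 and rebuild its middle two bytes from the rest:
assert recover_subblock(blocks, 1, 1, 3) == b"FG"
```

Working on sub-block ranges rather than whole blocks means only the damaged byte range needs to be read from each survivor.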
20150301892 | MEMORY SYSTEM - A memory system comprises an encoding processing circuit | 2015-10-22 |
20150301893 | DISTRIBUTED STORAGE AND COMMUNICATION - Storing, retrieving, transmitting and receiving data | 2015-10-22 |
20150301894 | ADAPTIVE REBUILD SCHEDULING SCHEME - Method and apparatus for redundant array of independent disks (RAID) recovery are disclosed. In one embodiment, a RAID controller schedules requests to rebuild failed drives based on the wear state of secondary drives and input/output (I/O) activity. The controller may be configured to assign higher scheduling priority to rebuild requests only when necessary, so as to reduce the time needed for the rebuild and to avoid affecting performance of the RAID system. In particular, the controller may give higher priority to rebuild requests if secondary drive failure is likely. In addition, the controller may determine when write-intensive periods occur, and assign lower priority to rebuild requests during such periods. | 2015-10-22 |
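The two scheduling rules in the abstract above (raise priority when a secondary-drive failure is likely, lower it during write-intensive periods) can be sketched as a priority function. The thresholds, weights, and scale are illustrative assumptions:

```python
# Sketch with assumed thresholds: rebuild priority rises when secondary
# drives are heavily worn (a second failure is likely) and falls during
# write-heavy intervals, mirroring the abstract's two rules.

def rebuild_priority(secondary_wear, write_load, base=5):
    """Return a scheduling priority in [0, 10] (higher = rebuild sooner).

    secondary_wear, write_load: fractions in [0, 1] (assumed metrics).
    """
    prio = base
    if secondary_wear > 0.8:   # worn secondaries: failure likely
        prio += 4
    if write_load > 0.7:       # write-intensive period: back off
        prio -= 3
    return max(0, min(10, prio))

assert rebuild_priority(0.9, 0.1) == 9   # urgent: secondary wear is high
assert rebuild_priority(0.2, 0.9) == 2   # defer: write-heavy, drives healthy
assert rebuild_priority(0.9, 0.9) == 6   # both rules apply
```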
20150301895 | ADAPTIVE REBUILD SCHEDULING SCHEME - Method and apparatus for redundant array of independent disks (RAID) recovery are disclosed. In one embodiment, a RAID controller schedules requests to rebuild failed drives based on the wear state of secondary drives and input/output (I/O) activity. The controller may be configured to assign higher scheduling priority to rebuild requests only when necessary, so as to reduce the time needed for the rebuild and to avoid affecting performance of the RAID system. In particular, the controller may give higher priority to rebuild requests if secondary drive failure is likely. In addition, the controller may determine when write-intensive periods occur, and assign lower priority to rebuild requests during such periods. | 2015-10-22 |
20150301896 | LOAD BALANCING ON DISKS IN RAID BASED ON LINEAR BLOCK CODES - An improved technique involves assigning a different generator matrix to each data stripe of the redundant disk array such that all of the different generator matrices represent the same code. For example, when a k×n generator matrix G represents a linear code C, k being the message length and n the code length, then for any invertible k×k matrix P, the matrix G′=PG is also a generator that represents C. When C is a systematic code, then G consists of a k×k identity matrix representing payload data concatenated with a k×(n−k) parity matrix representing parity data. Certain matrices P represent row operations on G, meaning that the matrix G′ may move the columns of the identity matrix in G to different locations in G′. | 2015-10-22 |
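The G′ = PG identity is easy to check concretely over GF(2). Below is a small sketch (the [3,2] parity code and the particular P are illustrative choices) verifying that multiplying a systematic generator by an invertible P yields a different generator for the same code:

```python
# Sketch: over GF(2), G' = P * G generates the same code as G whenever P
# is invertible, so each stripe can use its own equivalent generator.

def mat_mul_gf2(A, B):
    m, p = len(B), len(B[0])
    return [[sum(A[i][k] & B[k][j] for k in range(m)) & 1
             for j in range(p)] for i in range(len(A))]

def codewords(G):
    """All codewords generated by the rows of G (messages of length k)."""
    k, n = len(G), len(G[0])
    words = set()
    for msg in range(1 << k):
        w = [0] * n
        for i in range(k):
            if (msg >> i) & 1:
                w = [a ^ b for a, b in zip(w, G[i])]
        words.add(tuple(w))
    return words

# Systematic [3,2] single-parity code: G = [I | parity column]
G = [[1, 0, 1],
     [0, 1, 1]]
P = [[1, 1],       # invertible over GF(2) (a row operation)
     [0, 1]]
G2 = mat_mul_gf2(P, G)
assert G2 != G
assert codewords(G) == codewords(G2)   # same code, different generator
```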
20150301897 | METHOD AND SYSTEM FOR MANAGING SECURE ELEMENT - A method and system for safely backing up secure information of an SE in a terminal to a secure server, and then safely restoring the backed-up secure information to the SE of the terminal or another terminal is provided. The method for managing the SE includes: identifying secure information; generating backup data; transmitting the backup data to a secure server; and restoring the backup data to the SE or another SE. The identifying the secure information includes, in response to a backup command on the secure information of the SE, identifying the secure information stored in the SE. The generating the backup data includes generating backup data using at least part of the identified secure information. The transmitting the backup data to the secure server includes setting a secure channel between the secure server and the SE, and transmitting the backup data from the SE to the secure server through the secure channel and storing the backup data. The restoring the backup data to the SE or another SE includes, in response to a restoration command on the secure information, restoring the backup data. | 2015-10-22 |
20150301898 | CONDITIONAL SAVING OF INPUT DATA - This document relates to preserving input data. One example includes obtaining a request that a service perform processing on input data to produce an output representation of the input data. This example also includes applying criteria to the request, and preserving the input data responsive to determining that the criteria are met. | 2015-10-22 |
20150301899 | SYSTEMS AND METHODS FOR ON-LINE BACKUP AND DISASTER RECOVERY WITH LOCAL COPY - Systems and methods are disclosed for rapidly restoring a client data set for a computer by storing the client data and one or more patch sets required to revert to one or more versions of the client data on a remote server; storing a local copy of the replicated client data on a local data storage device coupled to the computer; receiving a request to revert to a predetermined version of the client data; using the local copy as a seed, receiving a patch set corresponding to the predetermined version; and updating the local copy using the patch set to generate the predetermined version. | 2015-10-22 |
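The seed-plus-patch restore described above can be sketched as follows. The patch-set format (per-offset byte overwrites) is an assumption; the abstract only says a patch set is applied to the local copy to produce the requested version:

```python
# Sketch: the local replica acts as a seed; the server sends only a patch
# set for the requested version, and the client applies it locally rather
# than downloading the whole version over the network.

def apply_patch_set(local_copy, patch_set):
    """patch_set: list of (offset, bytes) overwrites against the seed."""
    out = bytearray(local_copy)
    for offset, chunk in patch_set:
        out[offset:offset + len(chunk)] = chunk
    return bytes(out)

seed = b"hello world v2"        # local copy of the current client data
patch = [(13, b"1")]            # server-supplied patch for the old version
assert apply_patch_set(seed, patch) == b"hello world v1"
```

Only the patch travels over the network; the bulk of the data comes from the local copy, which is what makes the restore rapid.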
20150301900 | SYSTEMS AND METHODS FOR STATE CONSISTENT REPLICATION - Systems and methods are disclosed for state consistent replication of a client data set on a client computer by generating a snapshot of the client data set on a local volume; synchronizing with a remote server volume corresponding to the local volume to create a copy of the client data set on the remote server; performing a master to slave replication of the data set; and taking a snapshot of the server data set to create a mirror of the snapshot of the client data set on the server. | 2015-10-22 |
20150301901 | SYSTEM AND METHOD FOR ADJUSTING MEMBERSHIP OF A DATA REPLICATION GROUP - A system that implements a data storage service may store data on behalf of storage service clients. The system may maintain data in multiple replicas of partitions that are stored on respective computing nodes in the system. A master replica for a replica group may increment a membership version indicator for the group, and may propagate metadata (including the membership version indicator) indicating a membership change for the group to other members of the group. Propagating the metadata may include sending a log record containing the metadata to the other replicas to be appended to their respective logs. Once the membership change becomes durable, it may be committed. A replica attempting to become the master of a replica group may determine that another replica in the group has observed a more recent membership version, in which case logs may be synchronized or snipped, or the attempt may be abandoned. | 2015-10-22 |
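The membership-versioning flow in 20150301901 can be sketched as below. The replica fields, log record shape, and the comparison rule are assumptions; the sketch shows only the core ideas: the master increments the version and propagates it as a log record, and a mastership attempt is abandoned if any peer observed a newer version.

```python
# Sketch of membership versioning in a replica group (assumed structure).

class Replica:
    def __init__(self):
        self.membership_version = 0
        self.log = []

def propagate_membership_change(master, followers, change):
    """Master increments the version and ships a log record to peers."""
    master.membership_version += 1
    record = (master.membership_version, change)
    master.log.append(record)
    for f in followers:
        f.log.append(record)             # append to each replica's log
        f.membership_version = record[0]

def can_become_master(candidate, peers):
    """Abandon the attempt if any peer observed a newer version."""
    return all(candidate.membership_version >= p.membership_version
               for p in peers)

m, f1, f2 = Replica(), Replica(), Replica()
propagate_membership_change(m, [f1, f2], "add node-3")
stale = Replica()                        # never saw the membership change
assert can_become_master(f1, [m, f2])
assert not can_become_master(stale, [m, f1])
```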
20150301902 | Systems, Methods, and Computer Program Products for Instant Recovery of Image Level Backups - Systems, methods, and computer program products are provided for instant recovery of a virtual machine (VM) from a compressed image level backup without fully extracting the image level backup file's contents to production storage. The method receives restore parameters and initializes a virtual storage. The method attaches the virtual storage to a hypervisor configured to launch a recovered VM. The method stores virtual disk data changes inflicted by a running operating system (OS), applications, and users in a changes storage. The method provides the ability to migrate the actual VM disk state (taking into account changed disk data blocks accumulated in changes storage) to production storage without downtime, so as to prevent data loss resulting from the VM running during the recovery and accessing virtual storage. In embodiments, the method receives restore parameters via an interactive interface and delivers the recovery results via an automated message, such as an email message. | 2015-10-22 |
20150301903 | CROSS-SYSTEM, USER-LEVEL MANAGEMENT OF DATA OBJECTS STORED IN A PLURALITY OF INFORMATION MANAGEMENT SYSTEMS - Systems and methods are disclosed for cross-system user-level management of data objects stored in one or more information management systems, and for user-level management of data storage quotas in information management systems, including data objects in secondary storage. An illustrative quota manager is associated with one or more information management systems. The quota manager comprises a quota value representing the maximum amount of data storage allowed for a given end-user's primary and secondary data in the one or more information management systems. The quota manager determines whether data associated with the end-user has exceeded the storage quota, and if so, prompts the end-user to select data for deletion, the deletion to be implemented globally, across the primary and secondary storage subsystems of the respective one or more information management systems. Meanwhile, so long as the quota is exceeded, the quota manager instructs storage managers to block backups of the end-user's data. | 2015-10-22 |
20150301904 | MANAGING BACK UP OPERATIONS FOR DATA - Backup operations for data resources can be managed as follows. At least one data resource residing on at least one data storage device is identified. An information processing system automatically determines that the at least one data resource fails to be associated with a backup policy. In response to the at least one data resource failing to be associated with a backup policy, at least one backup policy is associated with the at least one data resource. | 2015-10-22 |
20150301905 | DISPERSED STORAGE NETWORK WITH DATA SEGMENT BACKUP AND METHODS FOR USE THEREWITH - A method begins with a processing module providing a data segment. The method continues with the processing module retrieving a plurality of first slices, corresponding to a previous revision of the data segment, from the distributed storage network. The method continues with the processing module recreating the previous revision of the data segment from the plurality of first slices corresponding to the previous revision of the data segment. The method continues with the processing module determining if the previous revision of the data segment compares unfavorably to the data segment. The method continues with the processing module storing the data segment in the DSN when it determines that the previous revision of the data segment compares unfavorably to the data segment. | 2015-10-22 |
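The compare-before-store logic above can be sketched as follows. The "compares unfavorably" rule is assumed to be simple inequality, and slice recreation is a stand-in for real erasure decoding; both are assumptions beyond the abstract:

```python
# Sketch: store a data segment in the DSN only when it differs from the
# previous revision rebuilt from its retrieved slices, avoiding redundant
# backup writes of unchanged segments.

def recreate_from_slices(slices):
    return b"".join(slices)          # stand-in for erasure-code decoding

def maybe_store(segment, prev_slices, store):
    previous = recreate_from_slices(prev_slices)
    if previous != segment:          # "compares unfavorably"
        store(segment)
        return True
    return False

written = []
assert maybe_store(b"new data", [b"old ", b"data"], written.append)
assert not maybe_store(b"old data", [b"old ", b"data"], written.append)
assert written == [b"new data"]     # unchanged segment was never rewritten
```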
20150301906 | Resolving Failed Mirrored Point-in-Time Copies with Minimum Disruption - When the mirrored point in time copy fails, at that point in time all the data for making the source and target of the point in time copy consistent is available on secondary volumes at disaster recovery site. The data for the source and target of the failed point in time copy are logically and physically equal at that point in time. This logical relationship can be maintained, and protected against ongoing physical updates to the affected tracks on the source secondary volume, by first reading the affected tracks from the source secondary volume, copying the data to the target secondary volume, and then writing the updated track to the source secondary volume. | 2015-10-22 |