20th week of 2017 patent application highlights part 45 |
Patent application number | Title | Published |
20170139686 | DYNAMIC SOFTWARE ASSEMBLY - An improved system and method for updating software is described. The system, upon detecting one or more changes within the set of eligibility attribute values associated with the one or more particular components of previously-provided software, selects a replacement component. The component is selected based on one or more changed eligibility attribute values within the set of eligibility attribute values, and the metadata of the user device. Using the replacement component, the replacement software is constructed and sent to the user device. | 2017-05-18 |
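The component selection step above can be sketched as a simple matching routine. This is a hypothetical illustration, not the patented implementation; the field names (`requires`, `attributes`, `device`) and the exact-match rule are assumptions.

```python
# Hedged sketch: pick a replacement component whose requirements all match
# the changed eligibility attribute values and the user-device metadata.
# Candidate/requirement field names are illustrative assumptions.

def select_replacement(candidates, attributes, device_metadata):
    """Return the first candidate component compatible with the current
    eligibility attributes and device metadata, or None."""
    for component in candidates:
        req = component["requires"]
        attrs_ok = all(attributes.get(k) == v
                       for k, v in req.get("attributes", {}).items())
        device_ok = all(device_metadata.get(k) == v
                        for k, v in req.get("device", {}).items())
        if attrs_ok and device_ok:
            return component
    return None
```

A real system would also rank multiple compatible candidates; this sketch simply takes the first match.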
20170139687 | PROGRAM CODE LIBRARY SEARCHING AND SELECTION IN A NETWORKED COMPUTING ENVIRONMENT - An approach for integrated development environment (IDE)-based program code library searching and selection in multiple programming languages in a networked computing environment is provided. In a typical embodiment, a search request (e.g., to locate a desired program code library) will be received in an IDE and parsed. The search request generally includes a set of annotations corresponding to at least one of: a primary program code language of the program code library, an alternate program code language of the program code library, or a method pair associated with the program code library. A search of at least one program code library repository will then be conducted based on the set of annotations, and a set of matching results will be generated. The set of matching results may include one or more program code libraries, and may be provided to a device hosting the IDE. | 2017-05-18 |
20170139688 | USER INTERFACE AREA COVERAGE - A method for user interface (UI) automation area coverage is presented. The method extracts document information from a unit test class, the unit test class being code used to test a user interface (UI). The method searches for a keyword within the extracted document information to find a keyword match. The method receives a weight factor from a user, the weight factor giving more importance to certain keywords over other keywords. The method weights specified keywords based on the weight factor, which increases or decreases the importance of the specified keywords. The method assigns a weight score to each keyword match based on the number of keyword matches and the weight factor. Furthermore, the method generates a user interface report, the UI report comprising the weight score. | 2017-05-18 |
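The scoring described above (matches × user-supplied weight) can be modeled in a few lines. This is an illustrative sketch only; the keyword names, weights, and case-insensitive matching are assumptions, not the patented method.

```python
# Hypothetical sketch of weighted keyword-coverage scoring: count each
# keyword's matches in the extracted doc text and multiply by its
# user-supplied weight factor. Matching rules are assumptions.

def score_keyword_matches(doc_text, keyword_weights):
    """Return {keyword: match_count * weight} for keywords that match."""
    scores = {}
    lowered = doc_text.lower()
    for keyword, weight in keyword_weights.items():
        matches = lowered.count(keyword.lower())
        if matches:
            scores[keyword] = matches * weight
    return scores

def coverage_report(doc_text, keyword_weights):
    """Bundle per-keyword scores with a total, like the UI report."""
    scores = score_keyword_matches(doc_text, keyword_weights)
    return {"scores": scores, "total": sum(scores.values())}
```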
20170139689 | CACHING METHOD TYPES CREATED FROM METHOD DESCRIPTOR STRINGS ON A PER CLASS LOADER BASIS - A method for caching a MethodType object. The method may include identifying, by a processor, a plurality of classes associated with a method descriptor string. The method may also include determining the identified plurality of classes are loaded into a language runtime environment associated with an object oriented programming language. The method may further include creating the MethodType object using the identified plurality of classes. The method may also include storing the created MethodType object in a cache. The method may further include transmitting the stored MethodType object to the language runtime environment. | 2017-05-18 |
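The per-class-loader caching idea above amounts to keying the cache on a (loader, descriptor) pair. The sketch below is a stand-in model, not real JVM machinery; the tuple "MethodType" value and loader names are assumptions.

```python
# Illustrative cache keyed on (class loader, method descriptor string),
# mirroring per-class-loader MethodType caching. The MethodType
# stand-in is an assumption; a JVM would build a real MethodType object.

class MethodTypeCache:
    def __init__(self):
        self._cache = {}
        self.misses = 0

    def resolve(self, loader, descriptor):
        """Return the cached MethodType for this loader/descriptor,
        creating and storing it on the first request."""
        key = (loader, descriptor)
        if key not in self._cache:
            self.misses += 1
            # Stand-in for resolving the descriptor's classes and
            # constructing the MethodType from them.
            self._cache[key] = ("MethodType", loader, descriptor)
        return self._cache[key]
```

Keying on the loader matters because the same descriptor string can resolve to different classes under different class loaders.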
20170139690 | UNIVERSAL TRANSCOMPILING FRAMEWORK - Described herein is a transcompiling framework. In accordance with one aspect, the framework generates a source abstract syntax representation corresponding to source code written in a source language. The framework may determine validity of constraints of a common denominator language by parsing the source abstract syntax representation, wherein the common denominator language is a set of intersects provided by the source language and reachable by a target language. In response to determining the constraints are valid, the source abstract syntax representation may be transformed to a target syntax representation associated with the target language. The target syntax representation is then printed as transcompiled source code in the target language. | 2017-05-18 |
20170139691 | POS APPLICATION DEVELOPMENT METHOD AND CLOUD SERVER - An embodiment of the present invention provides a POS application development method and a cloud server, which are configured to realize development and deployment of an application through the cloud server so as to shorten the development cycle of the POS application. The method of the embodiment of the present invention includes: by a cloud server, receiving and saving application project data transmitted from a WEB client; by the cloud server, processing the application project data and obtaining an application package; by the cloud server, performing signature processing for the application package and obtaining a signed application package; by the cloud server, transmitting the signed application package to a POS so that the POS realizes a corresponding application based on the signed application package. Another embodiment of the present invention further provides a cloud server. | 2017-05-18 |
20170139692 | SYSTEM FOR DISPLAYING NOTIFICATION DEPENDENCIES BETWEEN COMPONENT INSTANCES - The disclosed embodiments relate to a system that facilitates developing applications in a component-based software development environment. This system provides an execution environment comprising instances of application components and a registry that maps names to instances of application components. Within the registry, each entry is associated with a list of notification dependencies that specifies component instances to be notified when the registry entry changes. Upon receiving a command to display notification dependencies for the registry, the system generates and displays a dependency graph containing nodes representing component instances and arrows between the nodes representing notification dependencies between the component instances. Upon receiving a command to display a timeline for the registry, the system generates and displays a timeline representing events associated with the registry in chronological order. | 2017-05-18 |
20170139693 | CODE EXECUTION METHOD AND DEVICE - A code execution method and device are provided. The method includes: converting a source code into an intermediate code which supports both interpretive execution and compiled execution, where the compiled execution does not support a blocking operation; interpretively executing the intermediate code; dynamically profiling the intermediate code during the interpretive execution to obtain a profiling result; and, if the profiling result meets a requirement for starting the compiled execution, switching to compiled execution of the intermediate code. The method and the device may improve code execution efficiency. | 2017-05-18 |
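The interpret-profile-then-compile flow above is essentially tiered execution. The toy model below illustrates the switching decision only; the hotness threshold, call-count profiling, and the "compiled" label are assumptions, not the patented mechanism.

```python
# Toy tiered-execution model: run code "interpreted" while profiling it,
# then switch to a "compiled" fast path once the profile crosses a
# hotness threshold. Threshold and profiling metric are assumptions.

class TieredExecutor:
    def __init__(self, hot_threshold=3):
        self.hot_threshold = hot_threshold
        self.call_counts = {}      # profiling result per code unit
        self.compiled = set()      # units promoted to compiled execution

    def execute(self, name, func, arg):
        """Execute one code unit, recording which tier handled it."""
        self.call_counts[name] = self.call_counts.get(name, 0) + 1
        if name in self.compiled:
            return ("compiled", func(arg))
        if self.call_counts[name] >= self.hot_threshold:
            # Profiling result meets the requirement: promote the unit.
            self.compiled.add(name)
        return ("interpreted", func(arg))
```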
20170139694 | SYSTEM AND METHOD FOR LINK TIME OPTIMIZATION - A method for link time optimization comprises parsing, by a compiler, an intermediate representation file to determine what symbols are present in the intermediate representation file. The method comprises providing the symbols to a linker and creating, by the linker, a symbol use tree of all the symbols that are present in the intermediate representation file and other symbols in binary code received by the linker. The method further comprises discarding, by the linker, any received objects for which no use can be identified and all dependencies of the objects. The method includes providing, from the linker to the compiler, a preserve list of symbols, the preserve list comprising a list of symbols proven used by the objects and the intermediate representation files. The method comprises compiling the intermediate representation files and the objects based on the preserve list of symbols, and deleting, by the linker, any remaining unused objects. | 2017-05-18 |
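Building the preserve list above reduces to reachability over the symbol-use graph: anything reachable from the root symbols is preserved, everything else can be discarded. A minimal sketch, with symbol names and the adjacency-map layout as assumptions:

```python
# Sketch of computing a link-time preserve list: walk the symbol-use
# graph from root symbols; unreached symbols (and their objects) can be
# discarded. Graph representation is an illustrative assumption.

def build_preserve_list(roots, uses):
    """Return the set of symbols reachable from `roots` through `uses`
    (a dict mapping each symbol to the symbols it references)."""
    preserved = set()
    stack = list(roots)
    while stack:
        sym = stack.pop()
        if sym in preserved:
            continue
        preserved.add(sym)
        stack.extend(uses.get(sym, []))
    return preserved
```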
20170139695 | APPLICATION BLUEPRINTS BASED ON SERVICE TEMPLATES TO DEPLOY APPLICATIONS IN DIFFERENT CLOUD ENVIRONMENTS - Disclosed examples to configure an application blueprint involve selecting, during a runtime phase, a first service and a second service from a plurality of services mapped to a service template, the service template bound to a node by an application blueprint, the application blueprint generated during a design phase; generating, during the runtime phase, a first deployment profile to deploy a first application on the node in a cloud environment, the first deployment profile based on the application blueprint, the first deployment profile identifying the first service; and generating, during the runtime phase, a second deployment profile to deploy a second application on the node in the cloud environment based on the application blueprint, the second deployment profile identifying the second service. | 2017-05-18 |
20170139696 | Method and a system for merging several binary executables - The huge market of smartphones demands a vast number of applications with varying capabilities. For this, it is desirable that the capabilities of two or more executables be delivered together. However, several operating systems, such as Apple iOS, do not allow downloading an application with more than one binary executable file. | 2017-05-18 |
20170139697 | OFFLINE TOOLS INSTALLATION FOR VIRTUAL MACHINES - A method for managing tools on a virtual machine includes provisioning a virtual machine. The method also includes, before powering on the virtual machine, collecting a list of one or more tools on the virtual machine, and a version associated with each of the one or more tools. The method also includes determining if one or more new tools should be installed on the virtual machine. Responsive to determining that one or more new tools should be installed, the method includes retrieving a tool image for each new tool to be installed. The method further includes installing the one or more new tools on a virtual disk file of the provisioned virtual machine using the tool images. | 2017-05-18 |
20170139698 | INFORMATION PROCESSING APPARATUS AND PROGRAM UPDATE CONTROL METHOD - In an information processing apparatus, an execution unit executes a first program and a second program. While the information processing apparatus acts as a slave that performs a program update process in response to instructions from a different information processing apparatus, an update control unit updates the first program to a first updated program as a program to be executed. While the information processing apparatus acts as a master that controls the program update process, the update control unit updates the second program to a second updated program as a program to be executed, and notifies a management apparatus of a progress state of the program update process according to the first program or first updated program. | 2017-05-18 |
20170139699 | STORAGE DEVICE FLASHING OPERATION - An example hard disk drive includes a multiplexer. The multiplexer is coupled to a communication interface, a hard disk drive controller, and a storage device. The multiplexer is to, in response to a detection of a first selection command via a first set of pins of the communication interface, route first firmware data from a second set of pins of the communication interface to the storage device during a first flashing operation. The hard disk drive controller is bypassed during the routing of the first firmware data. The multiplexer is also to, in response to a detection of a second selection command via the first set of pins, route second firmware data from the hard disk drive controller to the storage device during a second flashing operation. The second firmware data is received via a third set of pins. | 2017-05-18 |
20170139700 | MULTIPLE LASER MODULE PROGRAMMING OVER INTERNAL COMMUNICATIONS BUS OF FIBER LASER - An apparatus includes a plurality of laser system modules coupled to a communication bus that includes a module update bus, each laser system module including at least one module update port coupled to the module update bus and at least one micro controller unit (MCU) in communication with the module update port, wherein each laser system module is situated to receive a module update instruction over the module update bus based on a type identifier in a general purpose input/output (GPIO) register of the at least one MCU of the corresponding laser system module that indicates a laser system module type. | 2017-05-18 |
20170139701 | Platform for Full Life-Cycle of Vehicle Relationship Management - A vehicle software update system is contemplated. The system may be utilized to facilitate Over-the-Air (OTA) software updates according to various functional requirements, the interaction of modules with other modules and/or the relationship of the modules with engines. The vehicle software update system may include a message engine, a transformation engine, an operation engine, an intelligent engine, and/or an analytical engine. | 2017-05-18 |
20170139702 | METHOD OF CLOUD ENGINEERING SERVICES OF INDUSTRIAL PROCESS AUTOMATION SYSTEMS - This invention provides a method of cloud engineering services for industrial process automation systems. It can be used by both developer and customer users. Implemented with cloud-based hardware/software, it comprises at minimum four types of blocks: the User Interface block, the Users block, at least one Virtual System block, and at least one Engineering Process block. The User Interface block is a website application and/or a mobile device application; the Users block keeps users' profiles, users' data including documents and binary files, and users' reviews; a Virtual System block is a cloud-based virtual industrial process automation system simulating instrumentation data, functionalities of process controllers, and computer servers/workstations; and an Engineering Process block is a cloud-based engineering service project, initiated by a project owner user, providing Virtual Systems and other cloud-based resources, providing virtual desktop computers and/or server sessions for invited developer users, keeping project data including documentation and binary data, and managing project reviews. It can further comprise the Objects block, allowing owners/developers to upload saleable objects, keeping object binary data and object specifications, and managing object reviews. It can further comprise the Market Place block, managing sale items, open order services, and a closed order archive. The saleable items include saleable objects, engineering services, and rentals of Virtual Systems and other cloud-based resources. The advantage of this invention is that all saleable items and/or user/object/project reviews can be validated via a Virtual System and/or project archives. | 2017-05-18 |
20170139703 | APPARATUS AND METHOD FOR SUPPORTING SHARING OF SOURCE CODE - In a shared change set server, a receiving section receives information on an undetermined change set and information on users sharing the undetermined change set from a terminal device used by a developer who has developed the change set. Subsequently, a shared change set management section prepares a shared change set containing the undetermined change set and information on users sharing the undetermined change set, and stores the shared change set in a shared change set storage section. A transmitting section thereafter transmits information on the shared change set to a terminal device used by a developer sharing the shared change set. | 2017-05-18 |
20170139704 | LIVE UPDATING OF A SHARED PLUGIN REGISTRY WITH NO SERVICE LOSS FOR ACTIVE USERS - Embodiments can enable the uploading of a newer version of a plugin package to a plugin service without affecting an existing user session that is using an older version of the plugin package. When a new user session begins, the plugin service can monitor one or more plugin packages and the versions used during the new user session. Throughout the user session, the plugin service continues to make the plugin packages available to the user regardless of newer versions being uploaded to the plugin service. In the meantime, multiple clients with different user sessions may be using different and possibly newer versions of the plugin packages at the same time. The plugin service can remove an older version of a plugin package when it determines that there are no longer any active user sessions utilizing the older version of the plugin package. | 2017-05-18 |
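The session-pinning behavior above can be modeled with reference counting: each session keeps the version it started with, and an old version becomes removable only when its last session ends. The class and method names below are assumptions for illustration, not the patented service's API.

```python
# Hedged sketch of no-service-loss plugin updates: sessions pin the
# version current at session start; an old version is removed only when
# no active session still uses it. All names are assumptions.

class PluginService:
    def __init__(self):
        self.latest = {}      # plugin name -> newest uploaded version
        self.sessions = {}    # session id -> {plugin: pinned version}
        self.refcounts = {}   # (plugin, version) -> active session count

    def upload(self, plugin, version):
        """Upload a newer version without touching existing sessions."""
        self.latest[plugin] = version

    def start_session(self, sid, plugin):
        """New sessions pin the currently-latest version."""
        version = self.latest[plugin]
        self.sessions[sid] = {plugin: version}
        key = (plugin, version)
        self.refcounts[key] = self.refcounts.get(key, 0) + 1
        return version

    def end_session(self, sid):
        """Return (plugin, version) pairs now safe to remove."""
        removable = []
        for plugin, version in self.sessions.pop(sid).items():
            key = (plugin, version)
            self.refcounts[key] -= 1
            if self.refcounts[key] == 0 and self.latest[plugin] != version:
                removable.append(key)
        return removable
```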
20170139705 | SOFTWARE PACKAGE ANALYZER FOR INCREASING PARALLELIZATION OF CODE EDITING - An identification of a software package stored in a code library and accessible for editing by at least a first developer and a second developer may be received. The software package may include a plurality of objects, and a first grant of editing access to the first developer for an object prohibits a second grant of editing access to the second developer for the object, while the first grant is valid. The object may be divided into a first object block and a second object block, characterized by first block development data and second block development data obtained from development data for the plurality of objects. Then, the first object block and the second object block may be identified for independent grants of editing access to the first developer and the second developer, based on the first block development data and the second block development data. | 2017-05-18 |
20170139706 | OPTIMIZING THREAD SELECTION AT FETCH, SELECT, AND COMMIT STAGES OF PROCESSOR CORE PIPELINE - An apparatus includes a buffer configured to store a plurality of instructions previously fetched from a memory, wherein each instruction of the plurality of instructions may be included in a respective thread of a plurality of threads. The apparatus also includes control circuitry configured to select a given thread of the plurality of threads dependent upon a number of instructions in the buffer that are included in the given thread. The control circuitry is also configured to fetch a respective instruction corresponding to the given thread from the memory, and to store the respective instruction in the buffer. | 2017-05-18 |
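The selection policy above keys off how many of a thread's instructions already sit in the buffer. The abstract does not specify the direction of the preference, so the sketch below assumes a fewest-first policy (starved threads catch up) and a lowest-id tie-break, both purely illustrative.

```python
# Toy fetch-stage thread selection: pick the thread with the fewest
# instructions already buffered. Fewest-first and the tie-break rule
# are assumptions; the patent only ties selection to the buffer count.

def select_thread(buffer_counts):
    """Given {thread_id: buffered instruction count}, return the thread
    to fetch next: lowest count, ties broken by lowest thread id."""
    return min(buffer_counts, key=lambda t: (buffer_counts[t], t))
```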
20170139707 | METHOD AND DEVICE FOR REGISTER MANAGEMENT - In a data processing method, a method and device for adjusting the number of registers used in a running thread according to a situation are disclosed. | 2017-05-18 |
20170139708 | DATA PROCESSING - Data processing circuitry comprises instruction queue circuitry to maintain one or more instruction queues to store fetched instructions; instruction decode circuitry to decode instructions dispatched from the one or more instruction queues, the instruction decode circuitry being configured to allocate one or more processor resources of a set of processor resources to a decoded instruction for use in execution of that decoded instruction; detection circuitry to detect, for an instruction to be dispatched from a given instruction queue, a prediction indicating whether sufficient processor resources are predicted to be available for allocation to that instruction by the instruction decode circuitry; and dispatch circuitry to dispatch an instruction from the given instruction queue to the instruction decode circuitry, the dispatch circuitry being responsive to the detection circuitry to allow deletion of the dispatched instruction from that instruction queue when the prediction indicates that sufficient processor resources are predicted to be available for allocation to that instruction by the instruction decode circuitry. | 2017-05-18 |
20170139709 | VECTOR LOAD WITH INSTRUCTION-SPECIFIED BYTE COUNT LESS THAN A VECTOR SIZE FOR BIG AND LITTLE ENDIAN PROCESSING - A method is disclosed for loading a vector with a processor. The method includes obtaining, by the processor, a variable-length vector load instruction. The method also includes determining that the vector load instruction specifies a vector register for a target, a memory address, and a length, wherein the memory address and the length are each specified in at least a general purpose register. The method also includes determining whether data should be loaded into the vector register using big endian byte-ordering or little endian byte-ordering. The method further includes loading data from memory into the vector register, wherein if the length is less than a length of the vector register, setting one or more residue bytes in the vector register to a pad value, wherein the residue bytes are determined based on the determined byte-ordering. | 2017-05-18 |
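A pure-software model of the variable-length load helps make the residue-byte behavior concrete. The 16-byte register width, zero pad value, and the assumption that big-endian loads left-justify the data (leaving residue on the right) while little-endian loads right-justify it are all illustrative choices, not taken from the patent text.

```python
# Illustrative model of a variable-length vector load: copy `length`
# bytes into a fixed-width register image, padding the residue bytes on
# the side the byte-ordering leaves unused. Width/pad/justification are
# assumptions.

def vector_load(memory, addr, length, big_endian, reg_size=16, pad=0):
    """Return the register contents as a list of reg_size byte values."""
    data = list(memory[addr:addr + length])
    residue = [pad] * (reg_size - length)
    return data + residue if big_endian else residue + data
```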
20170139710 | STREAMING ENGINE WITH CACHE-LIKE STREAM DATA STORAGE AND LIFETIME TRACKING - A streaming engine employed in a digital data processor specifies a fixed read-only data stream defined by plural nested loops. An address generator produces addresses of data elements. A stream head register stores the data elements next to be supplied to functional units for use as operands. The streaming engine fetches stream data ahead of use by the central processing unit core into a stream buffer constructed like a cache. The stream buffer cache includes plural cache lines, each including tag bits, at least one valid bit, and data bits. Cache lines are allocated to store newly fetched stream data and deallocated upon consumption of the data by a central processing unit core functional unit. Instructions preferably include operand fields with a first subset of codings corresponding to registers, a stream read-only operand coding, and a stream read-and-advance operand coding. | 2017-05-18 |
20170139711 | CONDITIONAL INSTRUCTION END OPERATION - A conditional instruction end facility is provided that allows completion of an instruction to be delayed. In executing the machine instruction, an operand is obtained, and a determination is made as to whether the operand has a predetermined relationship with respect to a value. Based on determining that the operand does not have the predetermined relationship with respect to the value, the obtaining and the determining are repeated. Based on determining that the operand has the predetermined relationship with respect to the value, execution of the instruction is completed. The obtaining the operand, the determining whether the operand has the predetermined relationship, the based on determining that the operand does not have the predetermined relationship with respect to the value, repeating the obtaining and the determining, and the based on determining that the operand has the predetermined relationship with respect to the value, completing execution of the instruction are performed as part of a single instruction having one operation code. | 2017-05-18 |
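The obtain-and-test loop described above can be sketched as a polling primitive. The callback-based operand source, the predicate form, and the poll cap are assumptions for illustration; in hardware this is a single instruction, not a software loop.

```python
# Toy model of the conditional-end facility: repeatedly obtain the
# operand and test the predetermined relationship; complete only when
# it holds. Operand source, predicate, and poll cap are assumptions.

def conditional_end(fetch_operand, relation, value, max_polls=1000):
    """Repeat obtain-and-test until relation(operand, value) holds,
    then report completion; raise if it never holds."""
    for polls in range(1, max_polls + 1):
        operand = fetch_operand()
        if relation(operand, value):
            return {"completed": True, "polls": polls, "operand": operand}
    raise TimeoutError("operand never reached the predetermined relationship")
```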
20170139712 | Efficient Emulation of Guest Architecture Instructions - A method includes determining that an operation should be performed to restore 80 bits stored in memory for an 80 bit register of a guest architecture on a host having 64-bit registers. The method further includes storing 64 bits from the 80 bits in a host register. The method further includes storing the remaining 16 bits from 80 bits in supplemental memory storage. The method further includes identifying a floating point operation that should be performed to operate on the 80-bit register for the guest architecture. As a result, the method further includes using the 64 bits in the host register and the remaining 16 bits stored in memory in a supplemental memory storage to translate a floating point number represented by the 80 bits to a 64-bit floating point number and store the 64-bit floating point number in the host register. | 2017-05-18 |
20170139713 | VECTOR STORE INSTRUCTION HAVING INSTRUCTION-SPECIFIED BYTE COUNT TO BE STORED SUPPORTING BIG AND LITTLE ENDIAN PROCESSING - A method is disclosed for storing vector data into memory with a processor. The method includes obtaining, by the processor, a variable-length vector store instruction. The method also includes determining that the vector store instruction specifies a vector register for a source, a memory address, and a length, where the memory address and the length are each specified in at least a general purpose register. The method also includes determining whether data should be stored into memory at the memory address using big endian byte-ordering or little endian byte-ordering. The method further includes storing data from the vector register into memory, where if the length is less than a length of the vector register, storing only the data from the vector register specified by the length. | 2017-05-18 |
20170139714 | CACHE STORING DATA FETCHED BY ADDRESS CALCULATING LOAD INSTRUCTION WITH LABEL USED AS ASSOCIATED NAME FOR CONSUMING INSTRUCTION TO REFER - A unified architecture for dynamic generation, execution, synchronization and parallelization of complex instruction formats includes a virtual register file, register cache and register file hierarchy. A self-generating and synchronizing dynamic and static threading architecture provides efficient context switching. | 2017-05-18 |
20170139715 | SYSTEMS, APPARATUSES, AND METHODS FOR PERFORMING DELTA DECODING ON PACKED DATA ELEMENTS - Systems, apparatuses, and methods for performing delta decoding on packed data elements of a source and storing the results in packed data elements of a destination using a single packed delta decode instruction are described. A processor may include a decoder to decode an instruction, and execution unit to execute the decoded instruction to calculate for each packed data element position of a source operand, other than a first packed data element position, a value that comprises a packed data element of that packed data element position and all packed data elements of packed data element positions that are of lesser significance, store a first packed data element from the first packed data element position of the source operand into a corresponding first packed data element position of a destination operand, and for each calculated value, store the value into a corresponding packed data element position of the destination operand. | 2017-05-18 |
20170139716 | HANDLING STALLING EVENT FOR MULTIPLE THREAD PIPELINE, AND TRIGGERING ACTION BASED ON INFORMATION ACCESS DELAY - A processing pipeline for processing instructions with instructions from multiple threads in flight concurrently may have control circuitry to detect a stalling event associated with a given thread. In response, at least one instruction of the given thread may be flushed from the pipeline, and the control circuitry may trigger fetch circuitry to reduce a fraction of the fetched instructions which are fetched from the given thread. A mechanism is also described to determine when to trigger a predetermined action when a delay in accessing information becomes greater than a delay threshold, and to update the delay threshold based on a difference between a return delay when the information is returned from the storage circuitry and the delay threshold. | 2017-05-18 |
20170139717 | BRANCH PREDICTION IN A DATA PROCESSING APPARATUS - An apparatus comprises instruction fetch circuitry to retrieve instructions from storage and branch target storage to store entries comprising source and target addresses for branch instructions. A confidence value is stored with each entry and when a current address matches a source address in an entry, and the confidence value exceeds a confidence threshold, instruction fetch circuitry retrieves a predicted next instruction from a target address in the entry. Branch confidence update circuitry increases the confidence value of the entry on receipt of a confirmation of the target address and decreases the confidence value on receipt of a non-confirmation of the target address. When the confidence value meets a confidence lock threshold below the confidence threshold and non-confirmation of the target address is received, a locking mechanism with respect to the entry is triggered. A corresponding method is also provided. | 2017-05-18 |
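The confidence bookkeeping above can be sketched with a saturating-style counter per entry. The concrete threshold values, the decrement-then-check ordering, and the counter start value are illustrative assumptions; the patent fixes only the relationships (lock threshold below confidence threshold, lock on non-confirmation).

```python
# Sketch of branch-entry confidence tracking: confirmations raise the
# counter, non-confirmations lower it; prediction requires exceeding the
# confidence threshold, and a non-confirmation at or below the lock
# threshold trips the entry's lock. Values/ordering are assumptions.

class BranchEntry:
    def __init__(self, confidence=0, threshold=4, lock_threshold=2):
        self.confidence = confidence
        self.threshold = threshold
        self.lock_threshold = lock_threshold  # kept below `threshold`
        self.locked = False

    def predicts(self):
        """Only fetch from the target when confidence exceeds threshold."""
        return self.confidence > self.threshold

    def update(self, confirmed):
        if confirmed:
            self.confidence += 1
        else:
            self.confidence -= 1
            if self.confidence <= self.lock_threshold:
                self.locked = True
```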
20170139718 | SYSTEM AND METHOD OF SPECULATIVE PARALLEL EXECUTION OF CACHE LINE UNALIGNED LOAD INSTRUCTIONS - A system and method of performing speculative parallel execution of a cache line unaligned load instruction including speculatively predicting whether a load instruction is unaligned with a cache memory, marking the load instruction as unaligned and issuing the instruction to a scheduler, dispatching the unaligned load instruction in parallel to first and second load pipelines, determining corresponding addresses for both load pipelines to retrieve data from first and second cache lines incorporating the target load data, and merging the data retrieved from both load pipelines. Prediction may be based on matching an instruction pointer of a previous iteration of the load instruction that was qualified as actually unaligned. Prediction may be further based on using a last address and a skip stride to predict a data stride between consecutive iterations of the load instruction. The addresses for both loads are selected to incorporate the target load data. | 2017-05-18 |
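The split-and-merge step above can be modeled by issuing two aligned cache-line reads that together cover the target bytes and merging them. The 64-byte line size and byte-slicing merge are illustrative assumptions standing in for the two hardware load pipelines.

```python
# Toy model of the unaligned-load split: two aligned "cache line" reads
# covering the target span, merged and then sliced to the requested
# bytes. The 64-byte line size is an assumption.

def unaligned_load(memory, addr, size, line=64):
    """Return `size` bytes at `addr`, fetched as two aligned lines."""
    first = addr - (addr % line)
    lo = memory[first:first + line]              # first load pipeline
    hi = memory[first + line:first + 2 * line]   # second load pipeline
    merged = lo + hi
    offset = addr - first
    return merged[offset:offset + size]
```

A real implementation only takes this split path when the predictor flags the load as straddling a line boundary; aligned loads use a single pipeline.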
20170139719 | WEB BROWSER DATA COMMUNICATION IN SYSTEM BOOTING - A device includes a non-volatile memory storing a host operating system, a web browser application, a device operating system, and a device application, and control circuitry operable to receive a first request from a host to load the host operating system, provide the device operating system to the host in response to the first request, and receive a second request from the host to load the device application. The device application is configured to transmit a web browser file for the web browser application over the Internet, and generate a host reboot command after transmitting the web browser file. | 2017-05-18 |
20170139720 | DIGITAL ASSISTANT SETTING UP DEVICE - A digital assistance device that at least partially automatically sets up a device so as to operate within a system of one or more other devices. The digital assistance device at least partially automates the setup process that would usually come in a quick start guide. This is made possible by digitalizing the quick start guide so as to be interpretable by the digital assistance device. The digital assistance device can thereby determine, for each step, what it can do based on its information and capability, but also how the instructions can be simplified based on what it knows; for what it cannot do, it passes all or a portion of the quick start guide for that step to the user via an interactable interface. Accordingly, potential manual setup tasks are offloaded to automation, thereby simplifying the setup of a device through technical automation. | 2017-05-18 |
20170139721 | IMPLEMENTATION OF RESET FUNCTIONS IN AN SOC VIRTUALIZED DEVICE - An apparatus and method for resetting a virtualized device are disclosed. The virtualized device may be coupled to a first port on a communication unit via a first link. The first port may send one or more instructions to the virtualized device via the first link using a first communication protocol. A processor may be configured to detect a reset condition for the virtualized device. In response to the detection of the reset condition for the virtualized device, the first port may disregard one or more transaction requests made by the virtualized device. The first port may further send an error message to the processor in response to receiving a Programmed Input/Output (PIO) request from the processor after the detection of the reset condition. | 2017-05-18 |
20170139722 | MEDIA FILE PLAYING METHOD AND DEVICE, MEDIUM AND BROWSER - A media file playing method and device. The method comprises: submitting information about a media file to a first window; judging whether the first window is a browser top window; if so, creating a sub-window in the browser top window, setting the sub-window as the first window, loading player logic in the browser top window, and playing the media file by using the player logic in the browser top window; otherwise, transmitting, by the first window, the information about the media file to the browser top window to which the first window belongs, and playing the media file by using the player logic in that browser top window. The present invention enables a webpage player window to play a new media file without refreshing when playing requests from other webpage browser windows are received. | 2017-05-18 |
20170139723 | USER EXPERIENCE MAPPING IN A GRAPHICAL USER INTERFACE ENVIRONMENT - A method is disclosed that includes recording data of a plurality of interactions with a graphical user interface (GUI) environment by a user as the user executes one or more operations in the GUI environment. An interaction may include movement between sequential input events in the GUI environment caused by the user. A graphical representation of the recorded data of the plurality of interactions may be generated. The graphical representation may include movement between at least two sequential input events. The graphical representation may be displayed in combination with (e.g., overlaid on) the GUI environment on a computer processor display. The graphical representation may include a map that depicts sequential movement between two or more input events and a linear timeline of the movement between the input events caused by the user. | 2017-05-18 |
20170139724 | METHOD OF WORKSPACE MODELING - In a method of workspace modeling, a user selection of a step is received at a workflow region of a workspace modeler, the workflow region including a plurality of steps, wherein at least one step of the plurality of steps is unavailable for user selection prior to satisfaction of a prerequisite condition associated with another step of the plurality of steps, and wherein available steps of the plurality of steps are selectable in any order by a user. Access to a plurality of objects associated with the step is provided in response to the user selection of the step, wherein the plurality of objects are selectable by the user for inclusion in a content region of the workspace modeler. A user selection of an object is received at the workflow region. A visualization of the object is added to the content region in response to the user selection of the object, wherein the visualization of the object remains persistent within the content region regardless of a user selection of a different step of the available steps. | 2017-05-18 |
20170139725 | DYNAMIC CONFIGURATION SYSTEM FOR DISTRIBUTED SERVICES - A system includes a dynamic configuration property database for a computer-based service. The system executes an application program interface that couples the computer-based service to the database. The system reads a dynamic configuration property from the database while the computer-based service is executing and without requiring the computer-based service to cease execution. The system also provides the dynamic configuration property to the computer-based service while the computer-based service is executing such that the computer-based service can use the configuration property without requiring the computer-based service to cease execution and without having to restart the computer-based service. | 2017-05-18 |
20170139726 | SERIAL DEVICE EMULATOR USING TWO MEMORY LEVELS WITH DYNAMIC AND CONFIGURABLE RESPONSE - A digital logic device is disclosed that includes registers, SRAM, DRAM, and a processor configured to store in the registers an initial portion of a first response data to a command, and store in the SRAM the first response data. The processor is further configured to store in a lookup table the memory location and size of the first response data in the SRAM, store in the DRAM additional response data, and store in the lookup table the memory location and size of the additional response data in the DRAM. The processor is configured to receive the command from a host device, retrieve the first response data from the registers or the SRAM, and send the first response data to the host. If the command includes additional response data, the processor is configured to concurrently retrieve the additional response data from DRAM and send the additional response data to the host. | 2017-05-18 |
20170139727 | COMMUNICATION NODE UPGRADE SYSTEM AND METHOD FOR A COMMUNICATION NETWORK - According to one embodiment of the present disclosure, a communication node upgrade system includes a computer-based set of instructions that are executed to identify an existing virtual machine (VM) to be upgraded, obtain upgraded software for the existing VM, create a new VM in a virtualized computing environment using the upgraded software, and copy configuration information from the existing VM to the new VM. Thereafter, the operation of the existing VM may be replaced with the new VM in the communication network. The existing VM comprises at least one communication node that provides one or more communication services for a communication network in which the existing VM is executed in a virtualized computing environment, and the configuration information includes information associated with the configuration of the existing VM to provide the communication services. | 2017-05-18 |
20170139728 | VIRTUAL MACHINE COLLABORATIVE SCHEDULING - A method for operating a processing system comprises, in a hypervisor: negotiating with a host platform to determine compatibility between a virtual machine and the host platform; responsive to determining that the virtual machine is compatible with the host platform, receiving a control block from the virtual machine; tagging the control block with information that associates the control block with a control group; determining whether the hypervisor is a base hypervisor; and scheduling the control block for processing responsive to determining that the hypervisor is the base hypervisor. | 2017-05-18 |
20170139729 | MANAGEMENT OF A VIRTUAL MACHINE IN A VIRTUALIZED COMPUTING ENVIRONMENT BASED ON A CONCURRENCY LIMIT - One or more concurrency limits may be checked in connection with the performance of a virtual machine management operation such as a virtual machine deploy, resize or migration operation to enable the virtual machine management operation to be scheduled on a host for which no concurrency limits have been met. | 2017-05-18 |
20170139730 | COMPOSITE VIRTUAL MACHINE TEMPLATE FOR VIRTUALIZED COMPUTING ENVIRONMENT - Composite virtual machine templates may be used in the deployment of virtual machines into virtualized computing environments. A composite virtual machine template may define a plurality of deployment attributes for use in a virtual machine deployment, and at least some of these deployment attributes may be determined through references to other virtual machine templates and included in the composite virtual machine template. | 2017-05-18 |
20170139731 | OFFLINE TOOLS UPGRADE FOR VIRTUAL MACHINES - A method for managing tools on a virtual machine includes provisioning a virtual machine. The method also includes, before powering on the virtual machine, collecting a list of one or more tools on the virtual machine, and a version associated with each of the one or more tools. The method also includes determining if an upgrade is available for any of the one or more tools. Responsive to determining an upgrade for any of the one or more tools is available, the method includes retrieving a tool image comprising an upgraded version of the one or more tools on the provisioned virtual machine. The method further includes modifying a virtual disk file of the provisioned virtual machine using the tool image. | 2017-05-18 |
20170139732 | VIRTUAL MACHINE MIGRATION MANAGEMENT - Disclosed aspects manage virtual machine migration on a shared pool of configurable computing resources. A virtual machine is monitored in order to identify a set of migration data with respect to the virtual machine. A set of migration events is detected with respect to the virtual machine. Based on the set of migration events, the set of migration data is collected. In response to a triggering event, a determination is made whether to migrate the virtual machine from a current host based on the set of migration data. In accordance with the determination, a selection can be made whether to migrate the virtual machine from the current host. | 2017-05-18 |
20170139733 | MANAGEMENT OF A VIRTUAL MACHINE IN A VIRTUALIZED COMPUTING ENVIRONMENT BASED ON A CONCURRENCY LIMIT - One or more concurrency limits may be checked in connection with the performance of a virtual machine management operation such as a virtual machine deploy, resize or migration operation to enable the virtual machine management operation to be scheduled on a host for which no concurrency limits have been met. | 2017-05-18 |
20170139734 | COMPOSITE VIRTUAL MACHINE TEMPLATE FOR VIRTUALIZED COMPUTING ENVIRONMENT - Composite virtual machine templates may be used in the deployment of virtual machines into virtualized computing environments. A composite virtual machine template may define a plurality of deployment attributes for use in a virtual machine deployment, and at least some of these deployment attributes may be determined through references to other virtual machine templates and included in the composite virtual machine template. | 2017-05-18 |
20170139735 | VIRTUAL MACHINE COLLABORATIVE SCHEDULING - A method for operating a processing system comprises, in a hypervisor: negotiating with a host platform to determine compatibility between a virtual machine and the host platform; responsive to determining that the virtual machine is compatible with the host platform, receiving a control block from the virtual machine; tagging the control block with information that associates the control block with a control group; determining whether the hypervisor is a base hypervisor; and scheduling the control block for processing responsive to determining that the hypervisor is the base hypervisor. | 2017-05-18 |
20170139736 | METHOD AND SYSTEM FOR UTILIZING SPARE CLOUD RESOURCES - A cloud computing system including a computing device configured to run virtual machine instances is disclosed. The computing device includes a hypervisor program for managing the virtual machine instances. A customer virtual machine instance is run by the hypervisor program on the computing device, and a grid virtual machine instance is run by the hypervisor program on the computing device. The grid virtual machine instance is configured to run only when a resource of the computing device is not being utilized by the customer virtual machine instance. | 2017-05-18 |
20170139737 | FULL VIRTUAL MACHINE FUNCTIONALITY - Full virtual machine (VM) functionality in one example implementation can include sending a complete initialization package to a location in memory of a machine accessible by a hypervisor and generating a VM capable of providing a respective full functionality of a hardware component in the machine. | 2017-05-18 |
20170139738 | METHOD AND APPARATUS FOR VIRTUAL DESKTOP SERVICE - Disclosed herein are a method and apparatus for virtual desktop service. The apparatus includes a connection manager configured to perform an assignment task of assigning a virtual machine to a user terminal using virtual desktop service, a resource pool configured to allocate software resources to a virtual desktop, wherein the software resources include an OS, applications, and user profiles, and a virtual machine infrastructure configured to support hardware resources including a CPU and a memory, wherein the connection manager is configured to perform a coordination task of coordinating a delivery protocol used between the user terminal and servers that provide the virtual desktop service, wherein the resource pool has a management function, wherein the management function is based on usage pattern information about a user's average usage of resources, and wherein the management function uses a physical distance on the network from the user terminal to a server. | 2017-05-18 |
20170139739 | METHODS AND SYSTEMS FOR PROVISIONING A VIRTUAL RESOURCE IN A MIXED-USE SERVER - A method for provisioning a virtualized resource includes directing, by a provisioning machine, a server-executed hypervisor to provision a virtual machine. The provisioning machine directs generation of an organizational unit within a first organizational unit within a multi-tenant directory service separated from a second organizational unit in the multi-tenant directory service by a firewall. The provisioning machine associates the virtual machine with the first organizational unit. The provisioning machine establishes at least one firewall rule on the virtual machine restricting communications to the virtual machine to communications from explicitly authorized machines, including at least one other machine within the organizational unit. The provisioning machine receives a request to provision a virtualized resource for at least one user. The provisioning machine updates data associated with the organizational unit to include an identification of the at least one user. The provisioning machine directs the virtual machine to host the virtualized resource. | 2017-05-18 |
20170139740 | Systems and Methods for Real Time Context Based Isolation and Virtualization - An embodiment method includes receiving, by an intellectual property (IP) block within a computing system, a transaction request and determining, by the IP block, a context corresponding to the transaction request. The method further includes determining, by the IP block, a view of the computing system defined by the context and processing, by the IP block, the transaction request in accordance with the view of the computing system defined by the context. | 2017-05-18 |
20170139741 | LEGACY APPLICATION MIGRATION TO REAL TIME, PARALLEL PERFORMANCE CLOUD - A system for operating a legacy software application is presented. The system includes a distributed processing service. A wrapper software object is configured both to receive processing requests to a legacy software application from outside the distributed processing service and to send the processing requests using the distributed processing service. Additionally, an encapsulated software object includes the legacy software application and an exoskeleton connection service. The exoskeleton connection service is both configured to accept processing requests from the distributed processing service, and mapped to an application programming interface of the legacy software application. | 2017-05-18 |
20170139742 | VIRTUAL MACHINE MIGRATION MANAGEMENT - Disclosed aspects manage virtual machine migration on a shared pool of configurable computing resources. A virtual machine is monitored in order to identify a set of migration data with respect to the virtual machine. A set of migration events is detected with respect to the virtual machine. Based on the set of migration events, the set of migration data is collected. In response to a triggering event, a determination is made whether to migrate the virtual machine from a current host based on the set of migration data. In accordance with the determination, a selection can be made whether to migrate the virtual machine from the current host. | 2017-05-18 |
20170139743 | VIRTUAL MACHINE MIGRATION TOOL - Tools and techniques for migrating applications to compute clouds are described herein. A tool may be used to migrate any arbitrary application to a specific implementation of a compute cloud. The tool may use a library of migration rules, apply the rules to a selected application, and in the process generate migration output. The migration output may be advisory information, revised code, patches, or the like. There may be different sets of rules for different cloud compute platforms, allowing the application to be migrated to different clouds. The rules may describe a wide range of application features and corresponding corrective actions for migrating the application. Rules may specify semantic behavior of the application, code or calls, storage, database instances, interactions with databases, operating systems hosting the application, and others. | 2017-05-18 |
20170139744 | SYSTEMS AND METHODS FOR FRAME PRESENTATION AND MODIFICATION IN A NETWORKING ENVIRONMENT - A data processing system can comprise a first module having a workspace and configured to execute a task that can request access to a frame in a system memory, a queue manager configured to store a frame descriptor which identifies the frame in the system memory, and a memory access engine coupled to the first module and the queue manager. The memory access engine copies requested segments of the frame to the workspace and has a working frame unit to store a segment handle identifying a location and size of each requested segment copied to the workspace of the first module. The memory access engine tracks history of a requested segment by updating the working frame unit when the requested segment in the workspace is modified by the executing task. | 2017-05-18 |
20170139745 | SCALING PRIORITY QUEUE FOR TASK SCHEDULING - In a computing system having multiple central processing unit (CPU) cores, the task scheduler can be configured to generate one or more priority value lists of elements, with each priority value list comprising elements having the same priority value. The priority queue of a task scheduler can be populated by links to priority value lists that are arranged in order of priority. Worker threads can access an input SIAO and determine the maximum priority of any element in the input SIAO. If the input SIAO has an element with higher priority than the priority queue of the task scheduler, then the worker thread can cause the task associated with that element to be processed; otherwise the worker thread can cause all of the elements of the SIAO to be put into the priority value lists linked to by the elements in the priority queue. | 2017-05-18 |
20170139746 | PROCESSING DATA SETS IN A BIG DATA REPOSITORY - The invention provides for a method for processing a plurality of data sets ( | 2017-05-18 |
20170139747 | SCHEDULING MAPREDUCE TASKS BASED ON ESTIMATED WORKLOAD DISTRIBUTION - A method for scheduling MapReduce tasks includes receiving a set of task statistics corresponding to task execution within a MapReduce job, estimating a completion time for a set of tasks to be executed to provide an estimated completion time, calculating a soft decision point based on a convergence of a workload distribution corresponding to a set of executed tasks, calculating a hard decision point based on the estimated completion time for the set of tasks to be executed, determining a selected decision point based on the soft decision point and the hard decision point, and scheduling upcoming tasks for execution based on the selected decision point. The method may also include estimating a map task completion time and estimating a shuffle operation completion time. A computer program product and computer system corresponding to the method are also disclosed. | 2017-05-18 |
20170139748 | EFFICIENT PROCESSOR LOAD BALANCING USING PREDICATION - A system and methods embodying some aspects of the present embodiments for efficient load balancing using predication flags are provided. The load balancing system includes a first processing unit, a second processing unit, and a shared queue. The first processing unit is in communication with a first queue. The second processing unit is in communication with a second queue. The first and second queues are each configured to hold a packet. The shared queue is configured to maintain a work assignment, wherein the work assignment is to be processed by either the first or second processing unit. | 2017-05-18 |
20170139749 | SCHEDULING HOMOGENEOUS AND HETEROGENEOUS WORKLOADS WITH RUNTIME ELASTICITY IN A PARALLEL PROCESSING ENVIRONMENT - Systems and methods are provided for scheduling homogeneous workloads including batch jobs, and heterogeneous workloads including batch and dedicated jobs, with run-time elasticity wherein resource requirements for a given job can change during run-time execution of the job. | 2017-05-18 |
20170139750 | INFORMATION PROCESSING APPARATUS AND COMPILATION METHOD - An apparatus includes one or more memories; and one or more processors configured to be coupled to the one or more memories, wherein the one or more processors are configured to generate, through compiling a source code, an object program, execute the object program as multiple processes generated by execution of the object program, allocate a first storage domain in the one or more memories for each of the multiple processes, allocate a variable in the first storage domain for each of the multiple processes, and notify multiple processes other than its own process of address information of a content of the variable for each of the multiple processes. | 2017-05-18 |
20170139751 | SCHEDULING METHOD AND PROCESSING DEVICE USING THE SAME - A scheduling method is provided. The method includes: recording a next instruction and a ready state of each thread group in a scoreboard; determining whether there is any ready thread group whose ready state is affirmative; determining whether a load/store unit is available, wherein the load/store unit is configured to access a data memory unit; when the load/store unit is available, determining whether the ready thread groups include a data access thread group, wherein the next instruction of the data access thread group is related to accessing the data memory unit; selecting a target thread group from the data access thread groups; and dispatching the target thread group to the load/store unit for execution. | 2017-05-18 |
20170139752 | SCHEDULING HOMOGENEOUS AND HETEROGENEOUS WORKLOADS WITH RUNTIME ELASTICITY IN A PARALLEL PROCESSING ENVIRONMENT - Systems and methods are provided for scheduling homogeneous workloads including batch jobs, and heterogeneous workloads including batch and dedicated jobs, with run-time elasticity wherein resource requirements for a given job can change during run-time execution of the job. | 2017-05-18 |
20170139753 | SCHEDULING APPLICATION INSTANCES TO PROCESSOR CORES OVER CONSECUTIVE ALLOCATION PERIODS BASED ON APPLICATION REQUIREMENTS - Systems and methods provide a processing task load and type adaptive manycore processor architecture, enabling flexible and efficient information processing. The architecture enables executing time variable sets of information processing tasks of differing types on their assigned processing cores of matching types. This involves: for successive core allocation periods (CAPs), selecting specific processing tasks for execution on the cores of the manycore processor for a next CAP based at least in part on core capacity demand expressions associated with the processing tasks hosted on the processor, assigning the selected tasks for execution at cores of the processor for the next CAP so as to maximize the number of processor cores whose assigned tasks for the present and next CAP are associated with same core type, and reconfiguring the cores so that a type of each core in said array matches a type of its assigned task on the next CAP. | 2017-05-18 |
20170139754 | A MECHANISM FOR CONTROLLED SERVER OVERALLOCATION IN A DATACENTER - A method of controlling a datacentre ( | 2017-05-18 |
20170139755 | EFFICIENT CHAINED POST-COPY VIRTUAL MACHINE MIGRATION - A hypervisor receives, from a second host at a third host, at a second time after a first time, a first plurality of pages. The first plurality of pages were copied at the first time, from a first host to the second host. The hypervisor receives a mapping at the third host, sent from the second host. The mapping indicates a first location of a second plurality of pages and a second location of a third plurality of pages. The hypervisor detects a page fault at the third host. The page fault is associated with a required page that is absent from the third host. Responsive to detecting this, the hypervisor queries the mapping to determine a source location of the required page and identifies a source host for the source location. The hypervisor receives the required page, from the source host at the third host. | 2017-05-18 |
20170139756 | PROGRAM PARALLELIZATION ON PROCEDURE LEVEL IN MULTIPROCESSOR SYSTEMS WITH LOGICALLY SHARED MEMORY - A data processing system includes: | 2017-05-18 |
20170139757 | A DATA PROCESSING APPARATUS AND METHOD FOR PERFORMING LOCK-PROTECTED PROCESSING OPERATIONS FOR MULTIPLE THREADS - A data processing apparatus and method are provided for executing a plurality of threads. Processing circuitry performs processing operations required by the plurality of threads, the processing operations including a lock-protected processing operation with which a lock is associated, where the lock needs to be acquired before the processing circuitry performs the lock-protected processing operation. Baton maintenance circuitry is used to maintain a baton in association with the plurality of threads, the baton forming a proxy for the lock, and the baton maintenance circuitry being configured to allocate the baton between the threads. Via communication between the processing circuitry and the baton maintenance circuitry, once the lock has been acquired for one of the threads, the processing circuitry performs the lock-protected processing operation for multiple threads before the lock is released, with the baton maintenance circuitry identifying a current thread amongst the multiple threads for which the lock-protected processing operation is to be performed by allocating the baton to that current thread. The baton can hence be passed from one thread to the next, without needing to release and re-acquire the lock. This provides a significant performance improvement when performing lock-protected processing operations across multiple threads. | 2017-05-18 |
20170139758 | Nondeterministic Operation Execution Environment Utilizing Resource Registry - A resource registry provides a nondeterministic operation environment affording flexible access for resource execution and status monitoring on the cloud. The resource registry service provides generic resource management utilizing registration, updating, and unregistration by resource providers. A requester for an operation may register, in the resource registry, an operation resource having parameters defined in metadata. The resource registry notifies a registered resource listener of this registration of the operation resource. The resource listener may then execute the operation according to parameters defined in the operation resource. The resource listener returns a response to the resource registry concerning a result of execution of the operation. The resource registry updates this status in the metadata of the operation resource. The requester is then able to look up the operation resource's metadata to determine the current status of the operation. The nondeterministic operation environment desirably avoids direct coupling between operation requester and operation executor APIs. | 2017-05-18 |
20170139759 | PATTERN ANALYTICS FOR REAL-TIME DETECTION OF KNOWN SIGNIFICANT PATTERN SIGNATURES - A method includes determining whether a first pattern has been marked as a known significant pattern by searching for the first pattern in a library of known significant patterns in a storage. The method also includes, in response to determining that the first pattern has been marked as a known significant pattern, determining whether the first pattern has a causal relationship with a second pattern and determining a strength of the causal relationship between the first pattern and the second pattern. The method also includes, based on the strength of the causal relationship, predicting whether the second pattern will occur, and, in response to predicting that the second pattern will occur, alerting a system administrator in real-time that the second pattern will occur. | 2017-05-18 |
20170139760 | DETECTING ANOMALOUS STATES OF MACHINES - The state of a system is determined from data sets that include a plurality of data instances representing states of one or more components of a computer system. The data instances are generated by one or more data set sources that are configured to output a data instance in response to a trigger associated with the one or more components. The data instances are normalized by the application of one or more rules. The data instances from individual data set sources are separately collated to generate groups of time-specific collated data instances. State types may be assigned to each of the collated data instance groups. Distributions of state-types across the groups may be determined and a list of infrequent state-types may be generated based on the determined distributions of state-types across the groups. | 2017-05-18 |
20170139761 | Variable-Term Error Metrics Adjustment - Systems, methods and/or devices are used to adjust error metrics for a memory portion of non-volatile memory in a storage device. In one aspect, a first write and a first read are performed on the memory portion. In accordance with results of the first read, a first error metric value for the memory portion is determined. In accordance with a determination that the first error metric value exceeds a first threshold value, an entry for the memory portion is added to a table. After the first write, when a second write to the memory portion is performed, it is determined whether the entry for the memory portion is present in the table. In accordance with a determination that the entry for the memory portion is present in the table, the second write uses a first error adjustment characteristic that is determined in accordance with the first error metric value. | 2017-05-18 |
20170139762 | AUTOMATED TESTING ERROR ASSESSMENT SYSTEM - Methods and systems for automatically resolving computerized electronic communication anomalies are disclosed herein. The system can include a memory including an error database containing information identifying a plurality of previous detected errors and configuration information associated with those errors. The system can include a plurality of user devices. Each of these plurality of user devices can include: a first network interface to exchange data via the communication network; and a first I/O subsystem to convert electrical signals to user interpretable outputs via a user interface. The system can include a server that can: receive an indication of the initiation of electronic communication; receive an electrical signal including attribute information; receive an error message; identify a trend in error messages; and provide an error solution if a trend is identified. | 2017-05-18 |
20170139763 | CYBER PHYSICAL SYSTEM - A cyber physical system including at least one monitoring and safety device for monitoring various parameters of a machine with regard to the maintenance of setpoint values and for generating an error signal in the event of an error and a hard-wired interface to the Internet and a transmission and/or reception unit for transmitting and/or receiving data over the Internet, wherein the monitoring or safety device is connected to the transmission and/or reception unit for transmitting the error signal over the Internet. The hard-wired interface is connected to a controllable switch for physical disconnection and enabling of the connection between the cyber physical system and the Internet, and the cyber physical system has at least one control unit connected to the monitoring or control device for triggering the controllable switch for brief enabling of the connection between the cyber physical system and the Internet. | 2017-05-18 |
20170139764 | MULTIPLE PATH ERROR DATA COLLECTION IN A STORAGE MANAGEMENT SYSTEM - In one aspect, multiple data path error collection is provided in a storage management system. In one embodiment, an error condition in a main data path between the storage controller and at least one of a host and a storage unit is detected, and in response, a sequence of error data collection operations to collect error data through a main path is initiated. In response to a failure to collect error data at a level of the sequential error data collection operations, error data is collected through an alternate data path as a function of the error data collection level at which the failure occurred. Other aspects are described. | 2017-05-18 |
20170139765 | DATA LOGGER - A device includes a connector configured to be coupled to a storage device. The device also includes a controller coupled to the connector and powered via the connector. The controller is configured to receive log data from the storage device while powered via the connector. The controller is also configured to transmit the log data via a wireless interface to a remote device. | 2017-05-18 |
20170139766 | MANAGEMENT OF COMPUTING MACHINES WITH TROUBLESHOOTING PRIORITIZATION - A solution is proposed for managing a plurality of computing machines. A corresponding method comprises causing each computing machine of at least part of the computing machines to execute a management activity on the computing machine; receiving a corresponding result of the execution of the management activity on each computing machine; determining a success fingerprint according to one or more characteristics of each of at least part of one or more success ones of the computing machines, wherein the corresponding result is indicative of a success of the execution of the management activity thereon; calculating a similarity index for each of one or more failure ones of the computing machines wherein the corresponding result is indicative of a failure of the execution of the management activity thereon; and prioritizing the computing machines which failed to accept a policy according to the corresponding similarity indexes. | 2017-05-18 |
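One plausible reading of the fingerprint-and-similarity scheme above can be sketched with attribute sets. Everything here is illustrative: machine characteristics are modeled as plain string sets, and the choice to rank the most-similar failed machines first is an assumption, since the abstract does not fix the sort direction.

```python
def success_fingerprint(success_machines):
    """Characteristics shared by every machine that ran the activity successfully."""
    sets = [set(m) for m in success_machines]
    return set.intersection(*sets) if sets else set()

def similarity_index(machine, fingerprint):
    """Fraction of fingerprint characteristics the failed machine also has."""
    if not fingerprint:
        return 0.0
    return len(set(machine) & fingerprint) / len(fingerprint)

def prioritize_failures(failure_machines, fingerprint):
    """Order failed machines by similarity to the success profile (highest first)."""
    return sorted(failure_machines,
                  key=lambda name: similarity_index(failure_machines[name], fingerprint),
                  reverse=True)

ok = [["linux", "x64", "v2"], ["linux", "x64", "v3"]]
failed = {"hostA": ["linux", "x64", "v1"], "hostB": ["win", "arm", "v1"]}
fp = success_fingerprint(ok)
order = prioritize_failures(failed, fp)
```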
20170139767 | DYNAMICALLY DETECTING AND INTERRUPTING EXCESSIVE EXECUTION TIME - Systems, methods, and computer program products to perform an operation comprising storing, by a kernel and in a queue, an indication that a first process has called a second process, collecting process data for at least one of the first process and the second process, determining, by the kernel, that an amount of time that has elapsed since the first process called the second process exceeds a time threshold, storing the queue and the process data as part of a failure data capture, and performing a predefined operation on at least one of the first process and the second process. | 2017-05-18 |
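The abstract above describes a kernel-level mechanism; a user-space analogue of the queue-plus-threshold idea can be sketched as follows. The threshold value, record layout, and function names are all hypothetical, and timestamps are passed in explicitly rather than read from a clock to keep the sketch deterministic.

```python
TIME_THRESHOLD = 0.5  # seconds; illustrative value, not from the patent

call_queue = []        # records of "first process called second process"
failure_captures = []  # saved state when a call exceeds the threshold

def record_call(caller, callee, now):
    """Store an indication that `caller` has called `callee` at time `now`."""
    call_queue.append({"caller": caller, "callee": callee, "start": now})

def check_excessive(now, process_data):
    """Capture the queue and process data for any call exceeding the threshold."""
    for entry in call_queue:
        if now - entry["start"] > TIME_THRESHOLD:
            failure_captures.append({"queue": list(call_queue),
                                     "process_data": process_data,
                                     "offender": entry})

record_call("procA", "procB", now=0.0)
check_excessive(now=1.0, process_data={"procB": {"cpu": 0.9}})
```

The "predefined operation" of the claim (e.g. cancelling or dumping the offending process) would be triggered wherever the capture is appended.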
20170139768 | SELECTIVELY DE-STRADDLING DATA PAGES IN NON-VOLATILE MEMORY - An apparatus, according to one embodiment, includes: one or more memory devices, each memory device comprising non-volatile memory configured to store data, and a memory controller connected to the one or more memory devices. The memory controller is configured to: detect at least one read of a logical page straddled across codewords, store an indication of a number of detected reads of the straddled logical page, and relocate the straddled logical page to a different physical location in response to the number of detected reads of the straddled logical page, wherein the logical page is written to the different physical location in a non-straddled manner. Other systems, methods, and computer program products are described in additional embodiments. | 2017-05-18 |
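The read-count-triggered relocation described above can be sketched as a small tracker. The threshold and class name are hypothetical; the actual controller would of course rewrite the page physically rather than just record the decision.

```python
RELOCATE_AFTER = 3  # illustrative read-count threshold

class StraddleTracker:
    """Counts reads of logical pages straddled across codewords and reports
    when a page should be rewritten to a new location in non-straddled form."""
    def __init__(self):
        self.read_counts = {}
        self.relocated = set()

    def on_read(self, logical_page, straddled):
        """Record a read; return True when the page should be relocated."""
        if not straddled or logical_page in self.relocated:
            return False
        n = self.read_counts.get(logical_page, 0) + 1
        self.read_counts[logical_page] = n
        if n >= RELOCATE_AFTER:
            self.relocated.add(logical_page)  # rewrite non-straddled elsewhere
            return True
        return False

tracker = StraddleTracker()
results = [tracker.on_read(7, straddled=True) for _ in range(3)]
```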
20170139769 | METHOD AND APPARATUS FOR ENCODING AND DECODING DATA IN MEMORY SYSTEM - A memory system includes a memory controller; and a memory device, the memory device including a memory cell array, the memory cell array including at least a first memory page having a plurality of memory cells storing a plurality of stored bits, the memory controller being configured such that it performs a first hard read operation on the first memory page to generate a plurality of read bits corresponding to the plurality of stored bits, and if the memory controller determines to change a value of one of a first group of bits, from among the plurality of read bits, the memory controller selects one of the first group of bits based on log likelihood ratio (LLR) values corresponding, respectively, to each of the first group of bits, and changes the value of the selected bit. | 2017-05-18 |
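A common way to use LLR values as in the abstract above is to flip the bit whose LLR magnitude is smallest, i.e. the least reliable read bit. This sketch assumes that interpretation; the function name and example values are illustrative.

```python
def select_bit_to_flip(read_bits, llr_values):
    """Pick the least-reliable bit (smallest |LLR|) and flip it."""
    idx = min(range(len(read_bits)), key=lambda i: abs(llr_values[i]))
    flipped = list(read_bits)
    flipped[idx] ^= 1
    return idx, flipped

bits = [1, 0, 1, 1]
llrs = [4.2, -3.1, 0.4, -5.0]   # low magnitude = low confidence in the read
idx, new_bits = select_bit_to_flip(bits, llrs)
```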
20170139770 | MEMORY DEVICE AND CORRECTION METHOD - A device is disclosed that includes a reference circuit, a readout circuit, and an error correction coding circuit. The reference circuit is configured to generate a reference signal. The readout circuit is configured to generate data values of second data according to the reference signal and first data. The error correction coding circuit is configured to reset the reference circuit when errors occur in all of the data values of the second data. | 2017-05-18 |
20170139771 | SEMICONDUCTOR MEMORY DEVICES, MEMORY SYSTEMS INCLUDING THE SAME AND METHODS OF OPERATING MEMORY SYSTEMS - A semiconductor memory device includes a memory cell array, an error correction circuit, an error log register and a control logic circuit. The memory cell array includes a plurality of memory bank arrays and each of the memory bank arrays includes a plurality of pages. The control logic circuit is configured to control the error correction circuit to perform an ECC decoding sequentially on some of the pages designated by at least one access address for detecting at least one bit error, in response to a first command received from a memory controller. The control logic circuit performs an error logging operation to write page error information into the error log register, and the page error information includes a number of error occurrences on each of those pages, determined from the detecting. | 2017-05-18 |
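The sequential decode-and-log loop above can be sketched in a few lines. The decoder here is a toy stand-in (it just counts set bits against an all-zero reference word); in the device, ECC decoding would report the number of corrected bit errors per page.

```python
def scrub_pages(pages, decode):
    """ECC-decode the designated pages sequentially; log per-page error counts."""
    error_log = {}
    for address in sorted(pages):
        corrected = decode(pages[address])   # number of corrected bit errors
        if corrected:
            error_log[address] = corrected   # write into the error log register
    return error_log

def toy_decode(word):
    """Stand-in decoder: bit errors = 1-bits relative to an all-zero reference."""
    return bin(word).count("1")

pages = {0x10: 0b0000, 0x11: 0b0101, 0x12: 0b0001}
log = scrub_pages(pages, toy_decode)
```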
20170139772 | PROTECTING EMBEDDED NONVOLATILE MEMORY FROM INTERFERENCE - Electromagnetic compatibility (EMC) of a system-on-a-chip (SoC) is enhanced by encoding at least a subset of control signals before the control signals are transmitted over a bus (e.g., a bus internal to a SoC) from a controller to an embedded nonvolatile memory (NVM). The error-detection code used causes an EMC event to introduce errors into the transmitted codewords with relatively high probability. In response to an error being detected in the transmitted codeword, a set of safeguarding operations are performed to prevent the data stored in the NVM from being uncontrollably changed. | 2017-05-18 |
20170139773 | Systems and Methods for Managing Address-Mapping Data in Memory Devices - Methods, apparatuses, and data storage devices are provided. Address-mapping data is compressed. The address-mapping data indicates mapping from a logical address to a physical address of a non-volatile memory of a storage device. Error checking and correction (ECC) data for the compressed address-mapping data is generated. The compressed address-mapping data and the ECC data are stored in the storage device. | 2017-05-18 |
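The compress-then-protect flow for address-mapping data described above can be sketched with standard-library pieces. This is only an analogy: `zlib` compression stands in for whatever compressor the device uses, and a CRC32 stands in for real ECC data (a CRC detects but cannot correct errors).

```python
import json
import zlib

def pack_mapping(mapping):
    """Compress an L2P mapping and attach a CRC32 as a stand-in for ECC data."""
    raw = json.dumps(mapping, sort_keys=True).encode()
    compressed = zlib.compress(raw)
    check = zlib.crc32(compressed)
    return compressed, check

def unpack_mapping(compressed, check):
    """Verify the stored check value, then decompress the mapping."""
    if zlib.crc32(compressed) != check:
        raise ValueError("mapping data corrupted")
    return json.loads(zlib.decompress(compressed).decode())

mapping = {"0x0000": 512, "0x0001": 1037}   # logical address -> physical address
blob, check = pack_mapping(mapping)
restored = unpack_mapping(blob, check)
```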
20170139774 | CORRECTION APPARATUS AND CORRECTION METHOD - According to an embodiment, a correction apparatus includes an acquisition unit and a detector. The acquisition unit acquires a plurality of entries each including a plurality of elements. The detector extracts, from the plurality of entries, a plurality of second entries each having a second element which is common to a second element of a first entry, the first entry being an entry selected from the plurality of entries, the second element of the first entry being an element other than a first element of the first entry, the first element of the first entry being an element selected from elements included in the first entry, and detects whether or not the first element of the first entry is a correction target based on first elements of the second entries. | 2017-05-18 |
20170139775 | METHOD AND APPARATUS FOR DISTRIBUTED STORAGE INTEGRITY PROCESSING - A distributed storage integrity system in a dispersed storage network includes a scanning agent and a control unit. The scanning agent identifies an encoded data slice that requires rebuilding, wherein the encoded data slice is one of a plurality of encoded data slices generated from a data segment using an error encoding dispersal function. The control unit retrieves at least a number T of encoded data slices needed to reconstruct the data segment based on the error encoding dispersal function. The control unit is operable to reconstruct the data segment from at least the number T of the encoded data slices and generate a rebuilt encoded data slice from the reconstructed data segment. The scanning agent is located in a storage unit and the control unit is located in the storage unit or in a storage integrity processing unit, a dispersed storage processing unit or a dispersed storage managing unit. | 2017-05-18 |
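The rebuild step above (retrieve T slices, reconstruct the segment, regenerate the bad slice) can be illustrated with the simplest possible dispersal function: single XOR parity, where any T = n − 1 of n slices suffice. The patent's error encoding dispersal function is more general (e.g. any-T-of-n erasure codes); this toy version only survives one lost slice.

```python
def encode_slices(segment, n_data):
    """Toy dispersal: split a segment into n_data slices plus one XOR parity
    slice, so any n_data of the n_data + 1 slices can rebuild the segment."""
    chunk = (len(segment) + n_data - 1) // n_data
    padded = segment.ljust(chunk * n_data, b"\x00")
    data = [padded[i * chunk:(i + 1) * chunk] for i in range(n_data)]
    parity = bytearray(chunk)
    for s in data:
        for i, b in enumerate(s):
            parity[i] ^= b
    return data + [bytes(parity)]

def rebuild_slice(available, chunk_len):
    """Regenerate the single missing slice: XOR of all remaining slices."""
    out = bytearray(chunk_len)
    for s in available:
        for i, b in enumerate(s):
            out[i] ^= b
    return bytes(out)

slices = encode_slices(b"hello world!", 3)   # 3 data slices + 1 parity slice
available = slices[:1] + slices[2:]          # slice 1 identified as needing rebuild
rebuilt = rebuild_slice(available, len(slices[0]))
```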
20170139776 | FAILURE MAPPING IN A STORAGE ARRAY - A storage cluster is provided. The storage cluster includes a plurality of storage nodes within a chassis. The plurality of storage nodes has flash memory for storage of user data and is configured to distribute the user data and metadata throughout the plurality of storage nodes such that the storage nodes can access the user data with a failure of two of the plurality of storage nodes. Each of the storage nodes is configured to generate at least one address translation table that maps around defects in the flash memory on one of a per flash package basis, per flash die basis, per flash plane basis, per flash block basis, per flash page basis, or per physical address basis. Each of the plurality of storage nodes is configured to apply the at least one address translation table to write and read accesses of the user data. | 2017-05-18 |
20170139777 | SYSTEMS AND METHODS FOR VIRTUALIZATION BASED SECURE DEVICE RECOVERY - Systems, methods, and/or techniques for performing device recovery using a device management agent (DMAG) on a device may be provided. The DMAG may be in a secure execution environment that may be protected by a hypervisor and/or may include or have a full network stack (e.g., via a tiny operating system associated therewith). The DMAG or other entity on the device may receive control of the device and/or may determine or detect whether an application and/or an operating system on the device may not be in a normal service. The DMAG or other entity may initiate a secure session with a DMS based on the application and/or operating system not being in the normal service such that the DMS may determine whether the device may have a potential software problem. The DMAG or other entity may set up or establish a recovery and/or upgrade session based on the device having the potential software problem (e.g., using the secure session) and/or may receive a software image to do a re-flash of the operating system and/or the application. The DMAG or other entity may send a re-boot request command such that the device may be re-booted (e.g., to get back into the normal service). | 2017-05-18 |
20170139778 | RELAY APPARATUS, RELAY METHOD, AND COMPUTER PROGRAM PRODUCT - A relay apparatus according to an embodiment includes a request transmitting unit, a response receiving unit, and a data transmitting unit. The request transmitting unit transmits an acquisition request to a provision apparatus. The response receiving unit receives the second data from the provision apparatus in response to the acquisition request. The data transmitting unit transmits, to the electronic control unit, the second data received from the provision apparatus, thereby causing the electronic control unit to update the first data firstly stored therein with the second data. When the updating has failed, the data transmitting unit transmits the first data to the electronic control unit, thereby causing the electronic control unit to restore the first data. | 2017-05-18 |
20170139779 | CONTROLLER, STORAGE SYSTEM, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM HAVING CONTROL PROGRAM STORED THEREIN - A controller for a second storing device is disclosed. The controller is adapted to restore a first logical volume provided in a first storing device, into a second logical volume. The controller comprises a memory and a processor. The processor executes a process comprising: obtaining specific information for identifying the first logical volume being a volume to be restored, based on restore processing data comprising data of the first logical volume and first setting information about the first storing device, with reference to a shared directory structure in the first storing device; extracting, from the first setting information, second setting information associated with the first logical volume, based on the obtained specific information; and generating third setting information about the second storing device, based on the extracted second setting information. As a result, data in a logical volume can be easily restored. | 2017-05-18 |
20170139780 | DATA RECOVERY OPERATIONS, SUCH AS RECOVERY FROM MODIFIED NETWORK DATA MANAGEMENT PROTOCOL DATA - The systems and methods herein permit storage systems to correctly perform data recovery, such as direct access recovery, of Network Data Management Protocol (“NDMP”) backup data that was modified prior to being stored in secondary storage media, such as tape. For example, as described in greater detail herein, the systems and methods may permit NDMP backup data to be encrypted, compressed, deduplicated, and/or otherwise modified prior to storage. The systems and methods herein also permit a user to perform a precautionary snapshot of the current state of data (e.g., primary data) prior to reverting data to a previous state using point-in-time data. | 2017-05-18 |
20170139781 | LOGICAL TO PHYSICAL TABLE RESTORATION FROM STORED JOURNAL ENTRIES - A controller-implemented method, according to one embodiment, includes: restoring a valid snapshot of an LPT (logical-to-physical table) from the non-volatile random access memory, examining each journal entry from at least one journal beginning with a most recent one of the journal entries in a most recent one of the at least one journal and working towards an oldest one of the journal entries in an oldest one of the at least one journal, the journal entries corresponding to updates made to one or more entries of the LPT, determining whether a current LPT entry which corresponds to a currently examined journal entry has already been updated, using the currently examined journal entry to update the current LPT entry in response to determining that the current LPT entry has not already been updated, and discarding the currently examined journal entry in response to determining that the current LPT entry has already been updated. | 2017-05-18 |
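The newest-first replay with discard-if-already-updated described above maps cleanly onto a short loop. This is a sketch under simplifying assumptions: the LPT is a dict from logical block address to physical page number, and each journal is a list of (lba, ppn) updates ordered oldest to newest.

```python
def restore_lpt(snapshot, journals):
    """Replay journals newest-first; each LPT entry takes only the most recent
    journal update, and older updates for the same entry are discarded."""
    lpt = dict(snapshot)   # valid snapshot of the logical-to-physical table
    updated = set()
    # journals[-1] is the most recent journal, so walk both levels in reverse.
    for journal in reversed(journals):
        for lba, ppn in reversed(journal):
            if lba in updated:
                continue           # discard: a newer update was already applied
            lpt[lba] = ppn
            updated.add(lba)
    return lpt

snap = {0: 100, 1: 101, 2: 102}
older = [(0, 200), (1, 201)]       # older journal
newer = [(1, 301)]                 # most recent journal
restored = restore_lpt(snap, [older, newer])
```

Walking newest-first means each entry is written at most once, which is the point of the discard rule in the claim.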
20170139782 | RECREATING A COMPUTING ENVIRONMENT USING TAGS AND SNAPSHOTS - A processing device receives a request to recreate an application from a particular point in time and determines a set of tags in a data store of hierarchical tags. The set of tags describe a computing environment hosting the application from the particular point in time. The hierarchical tags in the data store are created in response to a change to parameters of the computing environment. The processing device copies a snapshot from the data store to a replication data store, the snapshot of the computing environment being associated with a source data tag of the set of tags. The processing device recreates the computing environment hosting the application from the particular point in time in a replication environment using the set of tags and the snapshot stored in the replication data store. | 2017-05-18 |
20170139783 | METHOD AND APPARATUS FOR RECOVERY OF FILE SYSTEM USING METADATA AND DATA CLUSTER - A method and an apparatus for recovery of a file system using metadata and data clusters. The apparatus for recovery of a file system generates an MFT entry list in a disc or an evidence image, collects at least one data cluster candidate, and uses at least one MFT entry and at least one data cluster candidate within the MFT entry list to generate at least one MFT entry-data cluster pair candidate. The apparatus for recovery of a file system analyzes the at least one MFT entry-data cluster pair candidate to determine attribute values of a virtual partition and generate the virtual partition based on the attribute values. | 2017-05-18 |
20170139784 | Data Storage Devices and Data Maintenance Methods - A data storage device is provided. The data storage device includes a flash memory and a controller. The flash memory includes a plurality of blocks. Each block includes a plurality of pages. When the data storage device is resumed from a power-off event, the controller selects a first block which was written last before the power-off event among the plurality of blocks and writes data of a plurality of first pages of the first block into a plurality of second pages of the first block. | 2017-05-18 |
20170139785 | FILE SYSTEM FOR ROLLING BACK DATA ON TAPE - Rolling back data on tape in a file system is provided. A management tape is prepared. The management tape has only index files recorded thereon. The index files contain information about start positions and lengths of corresponding data files recorded on normal tapes. The index files further contain identification information for the normal tapes. A first index file of the management tape is read. The first index file is related to a data file to be rolled back. The first index file is read out from the management tape mounted on a first tape drive. The data file to be rolled back is read out of a first normal tape. The first normal tape is identified based on information in the first index file. The first normal tape is mounted on a second tape drive. | 2017-05-18 |