22nd week of 2017 patent application highlights part 33
Patent application number | Title | Published |
20170153889 | TRACKING CHANGES WITHIN JAVASCRIPT OBJECT NOTATION - The method includes adjusting, by one or more computer processors, a Javascript object notation structure to comprise a tag on at least one object and a tag on at least one array. The method further includes receiving, by one or more computer processors, data indicating a first set of at least one change to the Javascript object notation structure. The method further includes adjusting, by one or more computer processors, the tags in the Javascript object notation structure to include the first set of the at least one change in the Javascript object notation structure. The method further includes receiving, by one or more computer processors, data indicating the first set of the at least one change to the Javascript object notation structure is complete. The method further includes displaying the first set of the at least one change to the Javascript object notation structure based upon the adjusted tags. | 2017-06-01 |
20170153890 | HIGHLY INTEGRATED SCALABLE, FLEXIBLE DSP MEGAMODULE ARCHITECTURE - This invention implements a range of interesting technologies into a single block. Each DSP CPU has a streaming engine. The streaming engines include: a SE to L2 interface that can request 512 bits/cycle from L2; a loose binding between SE and L2 interface, to allow a single stream to peak at 1024 bits/cycle; one-way coherence where the SE sees all earlier writes cached in system, but not writes that occur after stream opens; full protection against single-bit data errors within its internal storage via single-bit parity with semi-automatic restart on parity error. | 2017-06-01 |
20170153891 | DATA PROCESSING APPARATUS AND METHOD - A data processing apparatus comprises a processing circuit and instruction decoder. A bitfield manipulation instruction controls the processing apparatus to generate at least one result data element from corresponding first and second source data elements. Each result data element includes a portion corresponding to a bitfield of the corresponding first source data element. Bits of the result data element that are more significant than the inserted bitfield have a prefix value that is selected, based on a control value specified by the instruction, as one of a first prefix value having a zero value, a second prefix value having the value of a portion of the corresponding second source data element, and a third prefix value corresponding to a sign extension of the bitfield of the first source data element. | 2017-06-01 |
20170153892 | Instruction And Logic For Programmable Fabric Hierarchy And Cache - In one embodiment, a processor includes: a first core to execute instructions; and a programmable fabric having a hierarchical arrangement including a first layer of programmable fabric and a second layer of programmable fabric. The programmable fabric may include a fabric interface controller to: receive a first programmable fabric control instruction from the first core; and responsive thereto, cause a first programmable fabric unit of the first layer of programmable fabric to execute an operation on first input data. Other embodiments are described and claimed. | 2017-06-01 |
20170153893 | METHOD FOR MANAGING TASKS IN A COMPUTER SYSTEM - A method for managing tasks in a computer system comprising a processor and a memory, the method includes performing a first task by the processor, the first task comprising task-relating branch instructions and task-independent branch instructions and executing the branch prediction method, the execution resulting in task-relating branch prediction data in the branch prediction history table. In response to determining that the first task is to be interrupted or terminated, the method includes storing the task-relating branch prediction data of the first task in the task structure of the first task. In response to determining that a second task is to be continued, the method includes reading task-relating branch prediction data of the second task from the task structure of the second task, storing the task-relating branch prediction data of the second task in the branch prediction history table, and ensuring that task-independent branch prediction data is maintained. | 2017-06-01 |
20170153894 | APPARATUS AND METHOD FOR BRANCH PREDICTION - An apparatus which produces branch predictions and a method of operating such an apparatus are provided. A branch target storage used to store entries comprising indications of branch instruction source addresses and indications of branch instruction target addresses is further used to store bias weights. A history storage stores history-based weights for the branch instruction source addresses and a history-based weight is dependent on whether a branch to a branch instruction target address from a branch instruction source address has previously been taken for at least one previous encounter with the branch instruction source address. Prediction generation circuitry receives the bias weight and the history-based weight of the branch instruction source address and generates either a taken prediction or a not-taken prediction for the branch. The reuse of the branch target storage to store bias weights reduces the total storage required and the matching of entire source addresses avoids problems related to aliasing. | 2017-06-01 |
20170153895 | METHOD FOR MANAGING TASKS IN A COMPUTER SYSTEM - A method for managing tasks in a computer system comprising a processor and a memory, the method includes performing a first task by the processor, the first task comprising task-relating branch instructions and task-independent branch instructions and executing the branch prediction method, the execution resulting in task-relating branch prediction data in the branch prediction history table. In response to determining that the first task is to be interrupted or terminated, the method includes storing the task-relating branch prediction data of the first task in the task structure of the first task. In response to determining that a second task is to be continued, the method includes reading task-relating branch prediction data of the second task from the task structure of the second task, storing the task-relating branch prediction data of the second task in the branch prediction history table, and ensuring that task-independent branch prediction data is maintained. | 2017-06-01 |
20170153896 | Instruction And Logic For In-Order Handling In An Out-Of-Order Processor - In one embodiment, a processor includes a decode logic, an issue logic to issue decoded instructions, and at least one execution logic to execute issued instructions of a program. The at least one execution logic is to execute at least some instructions of the program out-of-order, and the decode logic is to decode and provide a first in-order memory instruction of the program to the issue logic. In turn, the issue logic is to order the first in-order memory instruction ahead of a second in-order memory instruction of the program. Other embodiments are described and claimed. | 2017-06-01 |
20170153897 | LIGHTWEIGHT INTERRUPTS FOR FLOATING POINT EXCEPTIONS - Embodiments relate to lightweight interrupts for floating point exceptions. An aspect includes, based on an exception occurring in a floating point unit of a processor during execution of an application, sending a lightweight interrupt corresponding to the exception to the application; and handling the exception by an exception handler of the application. | 2017-06-01 |
20170153898 | REBOOT SYSTEM AND REBOOT METHOD - A reboot system includes a control panel for operating an apparatus including a control panel controller, a main controller that communicates with the control panel controller of the control panel to control the apparatus, and a sub-controller that controls a power supply of the apparatus. The control panel controller monitors communication between the main controller and the control panel controller to detect a communication failure, and notifies the sub-controller of the communication failure between the main controller and the control panel controller. The sub-controller requests the power supply to reboot when the notification from the control panel controller indicating the communication failure is received. | 2017-06-01 |
20170153899 | FAST-BOOTING APPLICATION IMAGE - Execution of an executable portion of an application source executing in a first computer instance is monitored at least up to a point relative to a variation point. The execution is halted at the point. An application image of the first computer instance usable to instantiate a second computer instance is copied based at least in part on the variation point such that the second computer instance continues execution of the executable portion of the application source from the variation point, and the application image is caused to be stored. | 2017-06-01 |
20170153900 | CLOUD COMPUTING OPERATING SYSTEM AND METHOD - A cloud computing operating system is described. The system, in one aspect, includes a plurality of code encapsulating data structures each configured to define executable code and to define the structure of one or more encapsulating data structures. The executable code is configured to instantiate one or more of the encapsulating data structures and to perform runtime operations on the encapsulating data structures. And the plurality of code encapsulating data structures form an inheritance hierarchy, each code encapsulating data structure being an encapsulating data structure itself and each encapsulating data structure instantiated by an associated code encapsulating data structure. Other aspects of the cloud computing operating system are also described. | 2017-06-01 |
20170153901 | SYSTEM FILE MANAGEMENT ON A STORAGE DEVICE - A method or system comprises reading content of a plurality of system files from storage media of a storage device, generating a master storage device system file, and storing the master storage device system file on the storage media at a master system file location. The location of the master system file is provided to boot firmware or hardware. As a result, when the system boots up, the master system file is read into a temporary cache. | 2017-06-01 |
20170153902 | SYSTEM SUSPENDING METHOD, SYSTEM RESUMING METHOD AND COMPUTER SYSTEM USING THE SAME - A system suspending method, a system resuming method and a computer system using the same are provided. The system resuming method is applied to resume the computer system to a normal status (S0 status) from a suspend-to-RAM status (S3 status) or a suspend-to-disk status (S4 status). The computer system includes a plurality of peripheral devices and a central processing unit. The peripheral devices are classified into a first group and a second group. The system resuming method includes the following steps. The central processing unit is powered on. Then, the peripheral devices belonging to the first group are resumed. Next, the computer system is thawed. | 2017-06-01 |
20170153903 | COMPUTERIZED SYSTEM AND METHOD FOR ANALYZING USER INTERACTIONS WITH DIGITAL CONTENT AND PROVIDING AN OPTIMIZED CONTENT PRESENTATION OF SUCH DIGITAL CONTENT - Disclosed herein is a statistical approach, a win share approach, used to assign a win share value to content items. User interaction with content items is tracked, and a win share value is assigned to content items in response to a “winning” action performed by a user. Win shares associated with content items are used to identify content items that are to be presented, and can further be used to identify an optimal presentation, e.g., layout, presentation frequency, etc., of content items that is to be presented. | 2017-06-01 |
20170153904 | ALERT DASHBOARD SYSTEM WITH SITUATION ROOM - A user interface system includes a first engine configured to receive message data from managed infrastructure that includes managed infrastructure physical hardware that supports the flow and processing of information. A second engine determines common characteristics of events and produces clusters of events relating to the failure of errors in the managed infrastructure, where membership in a cluster indicates a common factor of the events that is a failure or an actionable problem in the physical hardware managed infrastructure directed to supporting the flow and processing of information. One or more situations is created that is a collection of one or more events or alerts representative of the actionable problem in the managed infrastructure. A situation room includes a user interface (UI) for decomposing events from managed infrastructures. In response to production of the clusters one or more physical changes in a managed infrastructure hardware is made, where the hardware supports the flow and processing of information. | 2017-06-01 |
20170153905 | USER QUEST-ANCHORED ACTIVE DIGITAL MEMORY ASSISTANT - Systems, methods, and computer-readable storage media are provided for organizing information pertaining to entity quests in which a user is engaged in an easily retrievable and viewable manner. An active digital memory assistant on a user computing device may automatically detect and organize activity, taken by a particular user and centered on a single user intent, into an entity list. Information comprising a relevant entity list may be proactively surfaced to the user when the user is performing a task for which a related entity list exists. Alternatively, the user may manually invoke the active digital memory assistant (e.g., via selection of an appropriate icon or tile on the user's desktop) to show his or her entity related activity in the form of content previously extracted and actions previously taken. | 2017-06-01 |
20170153906 | VIRTUAL MACHINE RESOURCE ALLOCATION BASED ON USER FEEDBACK - A computer system may receive two or more messages. Each message may be sent by a user of one of a plurality of virtual machines that are executing on a host machine. Each message may request an adjustment of resource entitlements for the virtual machine. The computer system may aggregate the two or more messages. The computer system may determine whether a particular resource template type associated with at least one of the two or more messages should be adjusted based on the aggregated messages. | 2017-06-01 |
20170153907 | Out-of-band Management Of Virtual Machines - A management module of a managed computer is accessed via an out-of-band network. The management module communicates via a hypervisor with guest operating systems running in virtual machines on said managed computer. | 2017-06-01 |
20170153908 | METHOD AND APPARATUS FOR PROVIDING OPERATING SYSTEM BASED ON LIGHTWEIGHT HYPERVISOR - A method and apparatus for providing an operating system based on a lightweight hypervisor. An electronic device includes a hypervisor, an operating system monitor, and a virtualized operating system. The hypervisor enables the virtualized operating system and a physical machine to share the resources of the physical machine. If the virtualized operating system accesses the resource, the operating system monitor determines whether to allow the access to the resource. Also, the operating system monitor verifies the integrity of the virtualized operating system and determines whether a threat to the virtualized operating system exists. | 2017-06-01 |
20170153909 | Methods and Devices for Acquiring Data Using Virtual Machine and Host Machine - A method of acquiring data using a virtual machine, a method of acquiring data using a host machine, a system for accessing cloud data, and an electronic device thereof. The method of acquiring data using a virtual machine may include acquiring directory information of files that is stored in a cloud server. The virtual machine may further receive a selection operation of the files that is shown in the directory information and generate an acquisition request for data corresponding to the selection operation in the files. Further, the virtual machine may place the request in the buffer and receive return data corresponding to the selection operation from a host machine. In some implementations, the host machine may download data requested by the virtual machine and then provide the data to the virtual machine by sharing memory with the virtual machine. | 2017-06-01 |
20170153910 | SYSTEM AND METHOD FOR SUPPORTING TRANSACTION AFFINITY BASED ON RESOURCE MANAGER (RM) INSTANCE AWARENESS IN A TRANSACTIONAL ENVIRONMENT - A system and method can support transaction processing in a transactional environment. A transactional system operates to route a request to a transactional server, wherein the transactional server is connected to a resource manager (RM) instance. Furthermore, the transactional system can assign an affinity context to the transactional server, wherein the affinity context indicates the RM instance that the transactional server is associated with, and the transactional system can route one or more subsequent requests that are related to the request to the transactional server based on the affinity context. | 2017-06-01 |
20170153911 | DISTRIBUTED TRANSACTIONS ON MOBILE DEVICES VIA A MESSAGING SERVICE PROVIDED BY A MOBILE NETWORK OPERATOR - A method includes receiving, by a mobile device associated with a distributed transaction, a message via a messaging service provided by a mobile network operator. The method further includes determining, by a content based router of the mobile device, that the message is associated with the distributed transaction by determining that the message includes a transaction identifier that corresponds with an entry in a transaction table of the content based router. The entry identifies the distributed transaction and a destination of where to forward the message. The method further includes forwarding, by a processing device of the mobile device, the message to a resource manager resident on the mobile device. The resource manager corresponds to the destination of where to forward the message. The method further includes performing, by the resource manager, an action associated with the distributed transaction in view of contents of the message. | 2017-06-01 |
20170153912 | STORAGE MEDIUM AND ELECTRONIC DEVICE - A job for which execution is requested is classified as one of classes. The amount of data to be written into a non-volatile storage device by execution of the job for which execution is requested is acquired. The efficiency index is calculated for each of the classes based on an execution evaluation value of the class and the amount of data to be written into the non-volatile storage device by execution of at least one job that has been already classified as the class. From among the classes, a class having an efficiency index of no greater than an efficiency threshold value is determined as the execution suspending class. When the job for which execution is requested belongs to the execution suspending class, execution of the job is suspended. | 2017-06-01 |
20170153913 | EXECUTION CONTROL DEVICE THAT CAUSES OTHER ELECTRONIC DEVICE TO EXECUTE TASK, NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM THAT INCLUDES EXECUTION CONTROL PROGRAM, AND TASK EXECUTION SYSTEM - An execution control device as an electronic device causes a task starting system to execute a task. The task starting system is provided in a task starting device and starts tasks in turn. The task starting device is another electronic device. The execution control device includes a control section and a storage section. The control section governs over all operation controls of the execution control device. The storage section stores an execution control program designed for causing the task starting system to execute the task. The control section operates as a task registration section that registers with the task starting system the task executable by the task starting system by operating the execution control program stored in the storage section. | 2017-06-01 |
20170153914 | DISTRIBUTED DATA SET TASK SELECTION - An apparatus may include a processor and storage to store instructions that cause the processor to perform operations including: generate a current data set model descriptive of a characteristic of a current data set; compare the current data set model to at least one previously generated data set model descriptive of a characteristic of a previously analyzed data set; in response to detection of a match within a similarity threshold: retrieve an indication from a correlation database of an action previously performed on a previously analyzed data set; select a computer language based on node data descriptive of characteristics of a node device execution environment; generate node instructions in the selected computer language and based on the current data set model to cause the node device to perform the previously performed action on a portion of the current data set; and transmit the node instructions to the node device. | 2017-06-01 |
20170153915 | SELECTING PARTIAL TASK RESOURCES IN A DISPERSED STORAGE NETWORK - A method for use in a dispersed storage network operates to identify a selected DSTE unit of a subset of DSTE units to perform one or more partial tasks of a task to be performed on at least one encoded data slice; issue the one or more partial tasks to the selected DSTE unit; receive one or more partial results from the selected DSTE unit, wherein the selected DSTE unit performs the one or more partial tasks on the at least one encoded data slice to produce the one or more partial results; and output a result based on the received one or more partial results. | 2017-06-01 |
20170153916 | VOLTAGE DROOP MITIGATION IN 3D CHIP SYSTEM - The present invention relates to a multichip system and a method for scheduling threads in a 3D stacked chip. The multichip system comprises a plurality of dies stacked vertically and electrically coupled together; each of the plurality of dies comprising one or more cores, each of the plurality of dies further comprising: at least one voltage violation sensing unit, the at least one voltage violation sensing unit being connected with the one or more cores of each die, the at least one voltage violation sensing unit being configured to independently sense voltage violation in each core of each die; and at least one frequency tuning unit, the at least one frequency tuning unit being configured to tune the frequency of each core of each die, the at least one frequency tuning unit being connected with the at least one voltage violation sensing unit. The multichip system and method described in the present invention have many advantages, such as reducing voltage violation, mitigating voltage droop and saving power. | 2017-06-01 |
20170153917 | ACCOUNT ACTIVITY LEVEL BASED-SYSTEM RESOURCE ALLOCATING METHOD AND DEVICE - The present disclosure discloses a system resource allocating method and device based on account activity level, wherein the method includes: acquiring an account activity level parameter of a user and calculating an account activity level of each user according to the account activity level parameter of the user; determining an account activity level rank of each user according to the account activity level of the user and a preset account activity level rank dividing manner; establishing an account activity level index of each user according to a user number, the account activity level and the account activity level rank of the user; allocating the system resource for performing the information processing to a target user according to the account activity level index of the target user, where the information processing is to be performed on the target user. | 2017-06-01 |
20170153918 | SYSTEM AND METHOD FOR RESOURCE MANAGEMENT - Methods and systems of managing a resource in a distributed resource management system can include: receiving a resource request by at least one processor in the distributed resource management system, the resource request identifying a requested resource type corresponding to at least one of: a class identifier identifying a resource class assigned to a composite resource, and a class identifier identifying at least one additional resource associated with the composite resource; determining availability of the requested resource type; and scheduling a workload associated with the resource request for execution based on the determination. | 2017-06-01 |
20170153919 | SYSTEM FOR ANALYZING RESOURCE CAPACITY BASED ON ASSOCIATED DEPENDENCIES - Systems, computer program products, and methods are described herein for analyzing resource capacity based on associated dependencies. The present invention is configured to determine a resource capacity associated with an entity; determine one or more dependencies associated with the entity, wherein the one or more dependencies are associated with a resource value; receive a user input, wherein the user input comprises a dynamic allocation of the one or more dependencies to the entity; determine an aggregated resource value based on at least receiving the user input, wherein the aggregated resource value comprises an aggregate of the resource values associated with the one or more dependencies dynamically allocated to the entity; and initiate a presentation of a dynamic display, wherein the dynamic display comprises an indication of the resource capacity associated with the entity and an aggregated resource value associated with the one or more dependencies dynamically allocated to the entity. | 2017-06-01 |
20170153920 | RECRUITING ADDITIONAL RESOURCE FOR HPC SIMULATION - Graphics processing units (GPUs) deployed in general purpose GPU (GPGPU) units are combined into a GPGPU cluster. Access to the remote GPGPU cluster is then offered as a service to users who can use their own computers to communicate with the GPGPU cluster. The users' computers can be standalone desktop systems, laptops, or even another GPGPU cluster. The user can run a parallelized application locally and patiently wait for results or can dynamically recruit the remote GPGPU cluster to obtain those results more quickly. Dynamic recruitment means that the users can add remote GPGPU resources to a running application. | 2017-06-01 |
20170153921 | SYSTEM AND METHOD OF MANAGING CONTEXT-AWARE RESOURCE HOTPLUG - A resource hotplug managing method of a computing system includes accessing scenario data including a plurality of scenarios, evaluating the plurality of scenarios using context information about the computing system, and controlling hotplug-in or hotplug-out of a resource included in the computing system according to a satisfied scenario among the plurality of scenarios. | 2017-06-01 |
20170153922 | SIMULTANEOUS MULTITHREADING RESOURCE SHARING - A computer system may determine a mode for a processor. The processor may support SMT, and it may have a first hardware thread with a first architected resource and a second hardware thread with a second architected resource. The computer system may determine that the processor is in a reduced-thread mode. The computer system may determine that the first hardware thread is a primary hardware thread that is active in the reduced-thread mode, and that the second hardware thread is a secondary hardware thread that is inactive in the reduced-thread mode. The computer system may disable the second hardware thread. The computer system may enable the first hardware thread to access the second architected resources. | 2017-06-01 |
20170153923 | DATA PROCESSING SYSTEM FOR EFFECTIVELY MANAGING SHARED RESOURCES - A data processing system including a shared resource, a first data processing device configured to generate a first resource request signal requesting the shared resource, a second data processing device configured to generate a second resource request signal requesting the shared resource, and a resource manager master configured to receive the first resource request signal and the second resource request signal, check a state of the shared resource, determine whether the first resource request signal or the second resource request signal is received earlier, and output a grant signal to the first data processing device and a rejection signal to the second data processing device when the first resource request signal is received earlier than the second resource request signal. The first data processing device processes data using the shared resource according to the grant signal. | 2017-06-01 |
20170153924 | METHOD FOR REQUEST SCHEDULING AND SCHEDULING DEVICE - A method for request scheduling and a scheduling device are provided. The method includes the following steps. The utilization rate of a processing unit is monitored. Multiple periodic requests are received, where the i | 2017-06-01 |
20170153925 | NETWORK - The present invention provides a method which can be used to optimise the delivery of services over communications networks. Tasks which need to be executed within a short timescale and those which are not due to be executed for a long time are excluded from the optimisation process. A score is determined, using fuzzy logic, for each task and its related resources and for each resource and its related tasks. This score is then used to determine which tasks should be optimised. | 2017-06-01 |
20170153926 | OPTIMIZING COMPUTER HARDWARE RESOURCE UTILIZATION WHEN PROCESSING VARIABLE PRECISION DATA - Systems and methods for optimizing hardware resource utilization when processing variable-precision data are provided. Application data objects are processed using either a central processing unit (CPU) or the relatively lower precision data processing requirements of a dedicated math processing unit, e.g., a graphics processing unit (GPU), based on a level of precision determined for each application data object. The level of precision is used to calculate at least one bounding value for each application data object. The bounding value is compared to a selected precision threshold in order to determine whether the application data object can be processed by the GPU at a relatively lower level of precision without an undesirable loss of computational precision. | 2017-06-01 |
20170153927 | SYSTEM AND METHOD FOR RUNTIME GROUPING OF PROCESSING ELEMENTS IN STREAMING APPLICATIONS - A method, computer program product, and computer system for dynamically grouping and un-grouping processing operators and processing elements used by a streaming application. A distributed processing elements utilization of resources may be monitored to identify candidate operators and candidate processing elements for at least one of parallelization and fusion. At runtime, via at least one of parallelization and fusion, the grouping and un-grouping of the identified candidate operators and candidate processing elements may be dynamically adjusted. | 2017-06-01 |
20170153928 | COEXISTENCE OF MESSAGE-PASSING-LIKE ALGORITHMS AND PROCEDURAL CODING - First logical cores supported on physical processor cores in a computing system can be designated for execution of message-passing workers of a plurality of message workers while at least second logical cores supported on the physical processor cores can be designated for execution of procedural code such that resources of a physical processor core supporting the first logical core and the second logical core are shared between a first logical core and a second logical core. A database object in a repository can be assigned to one message-passing worker, which can execute operations on the database object while procedurally coded operations are processed using the second logical core on one or more of the plurality of physical processor cores while the first logical core executes the message-passing worker. | 2017-06-01 |
20170153929 | SYSTEM AND METHOD FOR PROVIDING ADDITIONAL FUNCTIONALITY TO EXISTING SOFTWARE IN AN INTEGRATED MANNER - An improved system and method are disclosed for improving functionality in software applications. In one example, the method includes a computing entity having a network interface, a processor, and a memory configured to store a plurality of instructions. The instructions include instructions for a superblock application having instructions for a function block included therein. The function block is configured to provide functions that are accessible to the superblock application via an application programming interface (API). The functions are provided within the superblock application itself and are accessible within the superblock application without switching context to another application on the computing entity. | 2017-06-01 |
20170153930 | APPLICATION CONTAINER RUNTIME - Disclosed is an application container runtime (“ACR”) designed to integrate with existing operating system components. The ACR is designed for minimal system resource drain while providing a number of options for security/system access privileges of running applications, including container and virtual machine security levels. The ACR integrates with existing operating system daemon processes and does not include a centralized daemon process internally. | 2017-06-01 |
20170153931 | NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM, COMMUNICATION ALGORITHM DETERMINATION METHOD, AND INFORMATION PROCESSING DEVICE - An information processing device in a parallel computer system, the information processing device includes a processor. The processor is configured to execute a process including performing first communication having a message length within a specific range according to communication algorithms different from each other, and measuring communication speeds according to the respective communication algorithms, determining a procedure of the communication algorithms in performing second communication on the basis of the measured communication speeds, the second communication having a message length within a range of the message length that is longer than the message length of the first communication, performing the second communication according to the determined procedure, and measuring the communication speeds according to the respective communication algorithms, and determining the communication algorithm according to the message length on the basis of measurement results in the first communication and the measurement results in the second communication. | 2017-06-01 |
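The probe-then-select procedure this abstract describes — time each algorithm at sample message lengths, then pick a winner per length — can be illustrated with a toy benchmark. The two "algorithms" and the timing harness below are stand-ins (e.g. for eager vs. rendezvous protocols), not the patented method:

```python
import time

def measure(algorithm, message_len):
    """Time one transfer of message_len bytes (toy in-memory 'send')."""
    payload = bytes(message_len)
    start = time.perf_counter()
    algorithm(payload)
    return time.perf_counter() - start

def algo_a(payload):
    bytes(payload)                              # single bulk copy

def algo_b(payload):
    buf = bytearray()
    for i in range(0, len(payload), 64):
        buf += payload[i:i + 64]                # chunked copy

def pick_algorithm(algorithms, lengths, trials=3):
    """Return {message_length: fastest algorithm} over the probed lengths."""
    table = {}
    for n in lengths:
        timings = {a: min(measure(a, n) for _ in range(trials))
                   for a in algorithms}
        table[n] = min(timings, key=timings.get)
    return table
```

A real implementation would probe short messages first and reuse those results to narrow the candidates for longer ranges, as the abstract outlines.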
20170153932 | ADAPTING LEGACY ENDPOINTS TO MODERN APIS - Example methods and systems are directed to adapting legacy endpoints to modern application programming interfaces (APIs). A legacy endpoint may provide a powerful and complex API. A modern application may desire access to the legacy endpoint. One or more layers may be added between the modern application and the legacy endpoint. Each layer may provide a different API. These layers of APIs may transform the interface from a powerful and complex interface to a more limited but simpler and easier to use interface. In some example embodiments, a proxy layer, an adapter layer, a facade layer, and a service layer may be used. | 2017-06-01 |
20170153933 | APPARATUS AND METHOD FOR DETECTING SINGLE FLIP-ERROR IN A COMPLEMENTARY RESISTIVE MEMORY - Described is an apparatus which comprises: a complementary resistive memory bit-cell; a first sense amplifier coupled to the complementary resistive memory bit-cell via access devices; a second sense amplifier coupled to the first sense amplifier and to the complementary resistive memory bit-cell via the access devices, wherein the second sense amplifier is operable to detect an error in the complementary resistive memory bit-cell. | 2017-06-01 |
20170153934 | MONITORING AND RESPONDING TO OPERATIONAL CONDITIONS OF A LOGICAL PARTITION FROM A SEPARATE LOGICAL PARTITION - Communicating with a logical partition of a computing system based on a separate logical partition, in which each of one or more computing systems includes a central electronics complex (CEC) capable of concurrently operating multiple logical partitions, each CEC includes a support element (SE), in which the SE includes access to memory locations of each of the multiple logical partitions within memory of the CEC, and the SE has a mapping of the memory locations for each of the multiple logical partitions. A request to retrieve data from the memory of a logical partition with an operational condition is detected. The request is transferred to an SE interface which enables remote access to the logical partition with the operational condition, and in response to receiving the data, the data that includes the operational status from the memory location of the logical partition is displayed. | 2017-06-01 |
20170153935 | PERFORMANCE ENGINEERING PLATFORM USING PROBES AND SEARCHABLE TAGS - A performance engineering platform using one or more probes and one or more searchable tags is described. In an embodiment, a set of attributes of a system to be monitored are determined. Based on the attributes of the system, one or more probes that include functionality to detect data from the system are identified. Data is detected from the system using at least one of the probes. In an embodiment, one or more reports are obtained. The reports are based on data detected by a set of probes. An association between a particular searchable tag and one of the set of probes is received. Responsive to receiving the association between the particular searchable tag and the probe, report values, associated with a subset of the data detected by the probe, are identified. Further, the report values are tagged with the particular searchable tag. | 2017-06-01 |
20170153936 | ROOT-CAUSE IDENTIFICATION SYSTEM AND METHOD FOR IDENTIFYING ROOT-CAUSE OF ISSUES OF SOFTWARE APPLICATIONS - The present disclosure relates to a method for identifying the root-cause of issues of software applications. The method comprises receiving one or more log files associated with software applications. The one or more log files are filtered to determine a pattern of each log file of the one or more log files. One or more types of issues associated with each of the one or more log files are determined based on the pattern of the corresponding one or more log files. Trends of the one or more types of issues are estimated by comparing the one or more types of issues with historical data relating to the corresponding pattern of the log file. The root-cause of issues of the one or more software applications is identified based on at least one of the one or more types of issues and the trends of the one or more types of issues. | 2017-06-01 |
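The pipeline in this abstract — filter logs into patterns, map patterns to issue types, compare against historical trends — can be sketched as follows. The regex patterns, issue names, and the "count above historical average" trend rule are invented for illustration:

```python
import re
from collections import Counter

# Hypothetical patterns mapping log lines to issue types.
PATTERNS = {
    "oom":     re.compile(r"OutOfMemory"),
    "timeout": re.compile(r"timed? ?out", re.IGNORECASE),
}

def classify(log_lines):
    """Count how often each issue type appears in the log."""
    counts = Counter()
    for line in log_lines:
        for issue, pattern in PATTERNS.items():
            if pattern.search(line):
                counts[issue] += 1
    return counts

def root_cause(counts, historical_avg):
    """Flag issue types trending above their historical average."""
    return [issue for issue, n in counts.items()
            if n > historical_avg.get(issue, 0)]

log = ["ERROR OutOfMemory in worker", "request timed out", "OutOfMemory again"]
```
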
20170153937 | REQUESTER SPECIFIED TRANSFORMATIONS OF ENCODED DATA IN DISPERSED STORAGE NETWORK MEMORY - A method for execution by a computing device of a dispersed storage network (DSN). The method begins by receiving a data access request, regarding a set of encoded data slices, that includes a requested return data format. The method continues with determining whether the requested return data format is a valid format and, when valid, issuing data access requests to storage units of the DSN. When a decode threshold number of encoded data slices of the set of encoded data slices are received from the storage units, the method continues by decoding the encoded data slices to recover a data segment and determining whether a data type of the data segment is consistent with the requested return data format. When the data type of the data segment is consistent, the method continues by formatting the recovered data segment and sending the formatted and recovered data segment to the requesting device. | 2017-06-01 |
20170153938 | METHOD, APPARATUS AND SYSTEM FOR AUTOMATICALLY REPAIRING DEVICE - A method, an apparatus, and a system are provided for automatically repairing a smart device in the field of computer technology. In the method, the apparatus receives a fault detection request transmitted by the smart device, the fault detection request carrying at least one current value of at least one preset parameter item of the smart device. The apparatus determines whether the at least one current value is within a preset range according to a first value characteristic. When it is determined that the at least one current value is within the preset range, the apparatus obtains first fault repair information corresponding to the first value characteristic from a correspondence table. The apparatus transmits the first fault repair information to the smart device, so that the smart device is automatically repaired according to the first fault repair information. | 2017-06-01 |
20170153939 | CONFIGURABLE RELIABILITY FOR MEMORY DEVICES - Technology relating to configurable reliability schemes for memory devices is disclosed. The technology includes a memory controller that selectively controls at least a type or an extent of a reliability scheme for at least a portion of a memory device. The technology also includes a computing device that can dynamically select and employ reliability schemes from a collection of different reliability schemes. A reliability scheme may be selected on a per-process, per-allocation request, per-page, per-cache-line, or other basis. The reliability schemes may include use of parity, use of data mirroring, use of an error correction code (ECC), storage of data without redundancy, etc. | 2017-06-01 |
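The per-request selection of a reliability scheme described in the preceding abstract can be sketched with an enum and a selector. The request hints (`critical`, `ecc_capable`) and the overhead ratios are invented examples, not values from the patent:

```python
from enum import Enum

class Scheme(Enum):
    NONE = "no redundancy"
    PARITY = "parity"
    MIRROR = "mirroring"
    ECC = "error-correcting code"

def select_scheme(request):
    """Pick a scheme from hypothetical per-allocation-request hints."""
    if request.get("critical"):
        return Scheme.MIRROR
    if request.get("ecc_capable"):
        return Scheme.ECC
    return Scheme.NONE

def overhead_bytes(scheme, size):
    """Rough storage overhead of each scheme for size data bytes."""
    if scheme is Scheme.MIRROR:
        return size          # full copy
    if scheme is Scheme.PARITY:
        return size // 8     # one parity bit per byte
    if scheme is Scheme.ECC:
        return size // 4     # illustrative SECDED-like ratio
    return 0
```
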
20170153940 | RECOVERING DATA COPIES IN A DISPERSED STORAGE NETWORK - A method for use in a dispersed storage network operates to initiate retrieval of a decode threshold number of encoded data slices of each of one or more sets of encoded data slices in accordance with a first recovery approach. When a recovery time frame expires prior to receiving a second decode threshold number of encoded data slices of each of the one or more second sets of encoded data slices, the method proceeds to select a second data recovery approach that differs from the first recovery approach; recover a sufficient number of encoded data slices in accordance with the second data recovery approach; and dispersed storage error decode the sufficient number of encoded data slices to produce recovered data. | 2017-06-01 |
20170153941 | STORING DATA COPIES IN A DISPERSED STORAGE NETWORK - A method for use in a dispersed storage network operates to determine first information dispersal algorithm (IDA) parameters; determine second IDA parameters; divide data for storage to produce a plurality of first segments in accordance with the first IDA parameters and a plurality of second segments in accordance with the second IDA parameters; dispersed storage error encode the plurality of first segments utilizing the first IDA parameters to produce sets of first encoded data slices; dispersed storage error encode the plurality of second segments utilizing the second IDA parameters to produce sets of second encoded data slices; and facilitate storage of the sets of first encoded data slices and the sets of second encoded data slices in a plurality of storage units. | 2017-06-01 |
20170153942 | UTILIZING FAST MEMORY DEVICES TO OPTIMIZE DIFFERENT FUNCTIONS - A computing device includes an interface configured to interface and communicate with a dispersed storage network (DSN), a memory that stores operational instructions, and a processing module operably coupled to the interface and memory such that the processing module, when operable within the computing device based on the operational instructions, is configured to perform various operations. A computing device receives a data access request for an encoded data slice (EDS) associated with a data object. The computing device compares a slice name of the data access request with slice names stored within RAM. When the data access request slice name compares unfavorably with those stored slice names, the computing device transmits an empty data access response that includes no EDS to the other computing device without needing to access a hard disk drive (HDD) that stores EDSs. Alternatively, the computing device transmits a data access response that includes the EDS. | 2017-06-01 |
20170153943 | INITIATING REBUILD ACTIONS FROM DS PROCESSING UNIT ERRORS - A method begins by detecting a recovery error when decoding a seemingly valid threshold number of existing encoded data slices. The method continues by sending a notice of the recovery error and a known integrity check value for the data segment to a rebuild module. The method continues by the rebuild module retrieving the set of existing encoded data slices and selectively decoding a different combination of a decode threshold number of existing encoded data slices of the set of existing encoded data slices until the data segment is successfully recovered. The method continues by dispersed storage error encoding the successfully recovered data segment to produce a set of new encoded data slices. The method continues by comparing the seemingly valid encoded data slices with corresponding new encoded data slices on an encoded data slice by encoded data slice basis to identify a corrupted encoded data slice. | 2017-06-01 |
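The core loop here — try different decode-threshold combinations of slices until the decoded segment matches a known integrity check value — can be demonstrated with a toy k+1 erasure code (k data fragments plus one XOR parity slice). This simple code is a stand-in for the dispersed-storage IDA in the patent:

```python
import hashlib
from functools import reduce
from itertools import combinations

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data, k):
    """Split data into k fragments plus one XOR parity slice (slice k)."""
    frag = len(data) // k
    slices = {i: data[i * frag:(i + 1) * frag] for i in range(k)}
    slices[k] = reduce(xor, slices.values())
    return slices

def decode(subset, k):
    """Rebuild data from any k of the k+1 slices."""
    if all(i in subset for i in range(k)):       # parity not needed
        return b"".join(subset[i] for i in range(k))
    missing = next(i for i in range(k) if i not in subset)
    data_frags = [v for i, v in subset.items() if i != k]
    subset = dict(subset)
    subset[missing] = reduce(xor, data_frags + [subset[k]])
    return b"".join(subset[i] for i in range(k))

def recover(slices, k, known_digest):
    """Try k-slice combinations until the integrity check matches."""
    for names in combinations(sorted(slices), k):
        candidate = decode({i: slices[i] for i in names}, k)
        if hashlib.sha256(candidate).hexdigest() == known_digest:
            return candidate
    return None

data = b"abcdefgh"
slices = encode(data, 2)
digest = hashlib.sha256(data).hexdigest()
corrupted = dict(slices)
corrupted[0] = b"XXXX"        # simulate one corrupted slice
```

With one corrupted slice, the combination that excludes it still reproduces the segment, matching the known digest; comparing re-encoded slices against the stored ones would then expose the corrupt slice, as the abstract describes.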
20170153944 | MAKING CONSISTENT READS MORE EFFICIENT IN IDA+COPY SYSTEM - A computing device includes an interface configured to interface and communicate with a dispersed storage network (DSN), a memory that stores operational instructions, and a processing module operably coupled to the interface and memory such that the processing module, when operable within the computing device based on the operational instructions, is configured to perform various operations. The computing device receives a data access request for a data object and determines a first revision number of a corresponding set of EDSs stored among first SU(s) and a second revision number of a corresponding trimmed copy of the set of EDSs stored among second SU(s). When the second revision number compares favorably to the first revision number, the computing device issues the data access request to the first SU(s) and/or the second SU(s); when it does not, the computing device issues the data access request for the data object to only the first SU(s). | 2017-06-01 |
20170153945 | MEMORY MANAGEMENT SYSTEMS AND METHODS - The present invention facilitates efficient and effective utilization of storage management features. In one embodiment, a memory device comprises a memory interface, an ECC generation component, and storage components. The memory interface is configured to receive an access request to an address at which data is stored. The memory interface can also forward responses to the request including the data and ECC information associated with the data. The ECC generation component is configured to automatically establish an address at which the ECC information is stored based upon the receipt of the access request to an address at which data is stored. In one exemplary implementation, the internal establishment of the address at which the ECC information is stored is automatic. The storage components are configured to store the information. | 2017-06-01 |
20170153946 | PROCESS TO MIGRATE NAMED OBJECTS TO A DISPERSED OR DISTRIBUTED STORAGE NETWORK (DSN) - A computing device includes an interface configured to interface and communicate with a dispersed storage network (DSN), a memory that stores operational instructions, and a processing module operably coupled to the interface and memory such that the processing module, when operable within the computing device based on the operational instructions, is configured to perform various operations. The computing device receives data object information for a data object and stores the data object information in a dispersed index of a dispersed or distributed storage network (DSN). The computing device also dispersed error encodes the data object to generate sets of encoded data slices (EDSs) (e.g., for data segments of the data object) and updates the index entry state of the dispersed index to "moving" to indicate that the data object is moving. The computing device distributedly stores the sets of EDSs among storage units (SUs) of the DSN. | 2017-06-01 |
20170153947 | REBUILDING SLICES IN A DISPERSED STORAGE NETWORK - A method for use in a dispersed storage network operates to select a recovery of selected ones of one or more first sets of encoded data slices in response to detecting a storage error associated with the selected ones of the one or more first sets of encoded data slices; issue requests for a second decode threshold number of encoded data slices of selected ones of one or more second sets of encoded data slices corresponding to the selected ones of the one or more first sets of encoded data slices; decode the second decode threshold number of encoded data slices to produce recovered data in response to receiving the second decode threshold number of encoded data slices; encode the recovered data utilizing first IDA parameters associated with the first IDA to produce one or more rebuilt encoded data slices corresponding to the selected ones of the one or more first sets of encoded data slices; and facilitate storage of the one or more rebuilt encoded data slices. | 2017-06-01 |
20170153948 | SECURING ENCODING DATA SLICES USING AN INTEGRITY CHECK VALUE LIST - A method includes retrieving a read threshold number of integrity check value list (ICVL) encoded data slices of a set of ICVL encoded data slices. The method further includes determining whether an appended ICVL of each ICVL encoded data slice of the read threshold number of ICVL encoded data slices substantially match. When the appended ICVL of one of the ICVL encoded data slices does not substantially match the appended ICVL of the other ICVL encoded data slices, the method further includes determining a likely cause for the mismatch. When the likely cause is a missing revision update, the method further includes initiating rebuilding of the encoded data slice portion. The method further includes generating an integrity check value for the rebuilt encoded data slice and updating the integrity check value list. The method further includes appending the updated integrity check value list to the rebuilt encoded data slice. | 2017-06-01 |
20170153949 | Switching Allocation of Computer Bus Lanes - The embodiments relate to dynamically allocating lanes of a computer bus. A computer system is configured with a plurality of connectors in communication with a module, with each connector configured to receive a respective adapter. The module detects the presence of each primary and backup adapter, and controls an initial allocation of lanes to each detected primary adapter for maximizing adapter functionality. After the initial allocation and in response to detecting a failure of at least one primary adapter, the module dynamically switches lanes from the failed adapter to at least one of the one or more remaining primary adapters and the backup adapter. | 2017-06-01 |
20170153950 | DATA BACKUP USING METADATA MAPPING - An information processing apparatus, backup method, and program product that enable efficient differential backup. In one embodiment, an information processing apparatus for files stored in a storage device includes: a metadata management unit for managing metadata of files stored in the storage device; a map generation unit for generating a map which indicates whether metadata associated with an identification value uniquely identifying a file in the storage device is present or absent; and a backup management unit for scanning the metadata to detect files that have been created, modified, or deleted since the last backup, and storing at least a data block and the metadata for a detected file in a backup storage device as backup information in association with the identification value. | 2017-06-01 |
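The metadata-map comparison at the heart of this abstract — detect files created, modified, or deleted since the last backup by scanning metadata keyed on a unique identification value — can be sketched as a dictionary diff. The `{file_id: mtime}` layout is an assumed simplification:

```python
def diff_backup(previous_map, current_metadata):
    """Compare metadata maps keyed by a unique file id ({file_id: mtime})
    and report what changed since the last backup."""
    created  = [f for f in current_metadata if f not in previous_map]
    deleted  = [f for f in previous_map if f not in current_metadata]
    modified = [f for f, mtime in current_metadata.items()
                if f in previous_map and previous_map[f] != mtime]
    return created, modified, deleted

last_backup = {"a.txt": 100, "b.txt": 200, "c.txt": 300}
current     = {"a.txt": 100, "b.txt": 250, "d.txt": 400}
```

Only the data blocks and metadata for files in the `created` and `modified` lists need to be written to the backup store, which is what makes the differential backup efficient.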
20170153951 | INCREMENTAL SYNCHRONOUS HIERARCHICAL SYSTEM RESTORATION - An incremental synchronous hierarchical system restoration system. A hierarchical system, such as a file system, that has an incompletely populated hierarchy, such as a directory structure, is incrementally restored in response to each of at least some successive hierarchical system commands. For instance, in some embodiments, the hierarchical system restoration may be a just-in-time hierarchical system restoration that restores portions of the hierarchical system hierarchy just in time to provide the visualizations used for each hierarchical system command response. By so doing, the restoration system provides the illusion that the hierarchical system has already been restored, since the appropriate visualization and functionality is provided in response to each hierarchical system command, just as a fully populated hierarchical system would. The manner of acquiring and populating file system hierarchies is especially efficient so as to make such restoration possible in substantially real time. | 2017-06-01 |
20170153952 | REVERSE NETWORK ADDRESS TRANSLATION FAILOVER - In an example system, a first interface has a first address and a first port number. A second interface has a second address and a second port number. A router is in communication with the first and second interfaces over a network. The router is configured to request a first set of failover information from the first interface. The router is further configured to receive the first set of failover information from the first interface. The first set of failover information includes the second address and the first port number. The router is configured to detect a failure on the first interface. The router is further configured to modify a network address translation (NAT) table stored within the router by replacing the first address of the first interface with the second address of the second interface while retaining the first port number, such that the first port number remains unchanged. | 2017-06-01 |
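The table rewrite the router performs — swap the failed interface's address for the backup address while keeping the port mapping intact — can be shown with a toy NAT table. The table layout and addresses are invented for illustration:

```python
# Toy NAT table: public_port -> (private_addr, private_port).
nat_table = {
    8080: ("10.0.0.5", 8080),   # entries pointing at the primary interface
    9090: ("10.0.0.5", 9090),
}

def failover(table, failed_addr, backup_addr):
    """Rewrite entries for the failed interface, preserving port numbers."""
    for public_port, (addr, private_port) in table.items():
        if addr == failed_addr:
            table[public_port] = (backup_addr, private_port)
    return table
```

After `failover(nat_table, "10.0.0.5", "10.0.0.6")`, existing flows keyed by port continue to work because only the address half of each mapping changed.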
20170153953 | CABLE REPLACEMENT IN A SYMMETRIC MULTIPROCESSING SYSTEM - Data communication between nodes of a symmetric multiprocessing (SMP) computing system can be maintained during the replacement of a faulty cable used to interconnect the nodes. A data bus error caused by the faulty cable is detected, resulting in the activation of an alternative data path between the nodes, and the disabling of a data path through the faulty cable. A system notification indicating the faulty cable is issued, and in response to the nodes being interconnected with a replacement cable, the replacement cable is tested for reliability. After the replacement cable is determined to be reliable, a data path through the replacement cable is activated. | 2017-06-01 |
20170153954 | METHOD FOR REMOTE ASYNCHRONOUS REPLICATION OF VOLUMES AND APPARATUS THEREFOR - A method for remote asynchronous volume replication and an apparatus therefor. Asynchronous replication is applied to handle data changes on the source volume on the local site incurred by Host IO requests. In coordination with the “point-in-time differential backup” technology, the original data in the block to be written by a host IO request will be backed up to the Source BAS on the local site (backup-on-write operation) only when the original data being written into the block of the source volume is different from the data of the corresponding block of the destination volume on the remote site. As a result, once new data is written into the source volume completely, the host will be notified that its Host IO request is complete. Therefore, the data necessarily transmitted to the destination volume can be minimized, and the problem of remote data transmission limited by network bandwidth can be prevented effectively. | 2017-06-01 |
20170153955 | REDUNDANT STORAGE DEVICE, SERVER SYSTEM HAVING THE SAME, AND OPERATION METHOD THEREOF - A redundant storage device includes a first port, a second port different from the first port, a first storage device connected to the first port, and a second storage device connected to the second port. The first storage device changes an operation mode of the second storage device from a standby mode to an active mode using an internal communication. | 2017-06-01 |
20170153956 | Distributed Storage of Data - Multi-reliability regenerating (MRR) erasure codes are disclosed. The erasure codes can be used to encode and regenerate data. In particular, the regenerating erasure codes can be used to encode data included in at least one of two or more data messages to satisfy respective reliability requirements for the data. Encoded portions of data from one data message can be mixed with encoded or unencoded portions of data from a second data message and stored at a distributed storage system. This approach can be used to improve efficiency and performance of data storage and recovery in the event of failures of one or more nodes of a distributed storage system. | 2017-06-01 |
20170153957 | METHOD FOR TESTING A COMPUTER SYSTEM WITH A BASIC INPUT/OUTPUT SYSTEM INTERFACE PROGRAM - A method for testing a computer system includes activating an operating system of the computer system and selecting a first selectable number by a basic input/output system interface program. After selecting the first selectable number and rebooting the computer system, if a first enabled number of cores of the computer system is consistent with the first selectable number, but a second selectable number has not been selected, then the second selectable number is selected by the basic input/output system interface program. After selecting the second selectable number, if a second enabled number of cores of the computer system is consistent with the second selectable number, and no further number is selectable, then it is determined that the computer system has passed the test. | 2017-06-01 |
20170153958 | DETECTING DEGRADED CORE PERFORMANCE IN MULTICORE PROCESSORS - An embodiment of a system is disclosed, including an interface configured to communicate to a device under test (DUT). The DUT may include a plurality of processor cores. The system also includes a testing apparatus configured to concurrently measure a performance of a portion of each processor core to generate a first set of test values. Each test value of the first set may correspond to a given processor core of the plurality of processor cores. The testing apparatus may also be configured to analyze the first set of test values, and reject the DUT in response to a determination that at least one test value of the first set of test values exceeds a first threshold. | 2017-06-01 |
20170153959 | STREAMING ENGINE WITH DEFERRED EXCEPTION REPORTING - This invention is a streaming engine employed in a digital signal processor. A fixed data stream sequence is specified by a control register. The streaming engine fetches stream data ahead of use by a central processing unit and stores it in a stream buffer. Upon occurrence of a fault reading data from memory, the streaming engine identifies the data element triggering the fault, preferably storing its address in a fault address register. The streaming engine defers signaling the fault to the central processing unit until this data element is used as an operand. If the data element is never used by the central processing unit, the streaming engine never signals the fault. The streaming engine preferably stores data identifying the fault in a fault source register. The fault address register and the fault source register are preferably extended control registers accessible only via a debugger. | 2017-06-01 |
20170153960 | INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD - An information processing device includes a memory and a processor coupled to the memory and configured to determine a priority level of an application that provides a service depending on a predetermined state, using relation information between predetermined states, and to control storing of a log of the application in the memory depending on the priority level of the application. | 2017-06-01 |
20170153961 | SOLID STATE DISK - A solid state disk, including a main body, a processing unit, a display screen and a transmit port. The main body has a substrate and a shell portion covering two opposite side faces of the substrate, the substrate is provided with a memory module; the processing unit is disposed in the main body; the display screen is attached to the main body and viewable from outside of the solid state disk, the display screen is electrically connected with the processing unit, the processing unit can control a display state of the display screen; and the transmit port is disposed on the substrate, and the transmit port is electrically connected with the memory module. | 2017-06-01 |
20170153962 | MONITORING THE PERFORMANCE OF THREADED APPLICATIONS - An ability to monitor the performance of a threaded application is provided. A thread that is executing is detected, wherein the thread is spawned by a threaded application. A thread class of the thread is determined. A performance metric of the thread is measured. A trend that describes a consumption of the performance metric as a function of percent execution time is interpolated. In response to determining that a threshold associated with the performance metric is exceeded based on a comparison of the trend to a trend template that is associated with the performance metric, an alert is issued. The alert identifies the thread as an abnormally executed thread in order to trigger a corrective action that improves a performance of a computing device that is configured to execute the threaded application. | 2017-06-01 |
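The trend test in this abstract — interpolate metric consumption as a function of percent execution time and alert when it exceeds a template — can be sketched with a least-squares slope. The slope metric, tolerance factor, and sample layout are illustrative assumptions:

```python
def interpolate_trend(samples):
    """Least-squares slope of metric value vs. percent execution time.
    samples: list of (percent_done, metric_value) pairs."""
    n = len(samples)
    sx = sum(p for p, _ in samples)
    sy = sum(v for _, v in samples)
    sxx = sum(p * p for p, _ in samples)
    sxy = sum(p * v for p, v in samples)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

def check_thread(samples, template_slope, tolerance=1.5):
    """Alert when the observed slope exceeds the template by tolerance x."""
    slope = interpolate_trend(samples)
    return "alert" if slope > template_slope * tolerance else "ok"

normal  = [(10, 10), (50, 50), (90, 90)]     # consumption grows at slope 1.0
runaway = [(10, 30), (50, 150), (90, 270)]   # consumption grows at slope 3.0
```
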
20170153963 | Method and System for Pre-Deployment Performance Estimation of Input-Output Intensive Workloads - A method and system are provided for pre-deployment performance estimation of input-output intensive workloads. Particularly, the present application provides a method and system for predicting the performance of an input-output intensive distributed enterprise application on multiple storage devices without deploying the application and the complete database in the target environment. The present method comprises generating the input-output traces of an application on a source system with varying concurrencies; replaying the generated traces from the source system on a target system where the application needs to be migrated; gathering performance data in the form of resource utilization, throughput and response time from the target system; and extrapolating the data gathered from the target system in order to accurately predict the performance of multi-threaded input-output intensive applications in the target system for higher concurrencies. | 2017-06-01 |
20170153964 | MONITORING AND RESPONDING TO OPERATIONAL CONDITIONS OF A LOGICAL PARTITION FROM A SEPARATE LOGICAL PARTITION - Communicating with a logical partition of a computing system based on a separate logical partition, in which each of one or more computing systems includes a central electronics complex (CEC) capable of concurrently operating multiple logical partitions, each CEC includes a support element (SE), in which the SE includes access to memory locations of each of the multiple logical partitions within memory of the CEC, and the SE has a mapping of the memory locations for each of the multiple logical partitions. A request to retrieve data from the memory of a logical partition with an operational condition is detected. The request is transferred to an SE interface which enables remote access to the logical partition with the operational condition, and in response to receiving the data, the data that includes the operational status from the memory location of the logical partition is displayed. | 2017-06-01 |
20170153965 | LISTING OPTIMAL MACHINE INSTANCES - A method for listing optimal machine instances in a computing environment based on user context is provided. The method includes receiving a task request based on a first task to be performed within the computing environment, identifying one or more similar tasks by comparing metadata for the first task to metadata for a plurality of other tasks based on a classification analysis, selecting the one or more similar tasks based on a result from the classification analysis exceeding a predetermined confidence level, and generating a list of one or more previous machine instances corresponding to the one or more similar tasks. The list of previous machine instances is associated with instructions to commence the previous machine instances. The plurality of other tasks include previous tasks performed within the computing environment on corresponding previous machine instances. The machine instances may include a virtual machine (VM) instance or a physical machine instance. | 2017-06-01 |
20170153966 | STREAMS: INTELLIGENT OPERATOR SUBSET FOR DEBUG - Techniques are disclosed for identifying a minimal operator subset in a distributed streams application for debugging purposes. A debugging tool receives a selection of operators from a plurality of operators included in a distributed application. The distributed application executes the plurality of operators in a runtime environment. The debugging tool identifies, based on the selected operators, a subset of the plurality of operators to execute in a debugging environment. The subset includes at least the selected operators. The debugging tool executes the subset of the plurality of operators in the debugging environment. | 2017-06-01 |
20170153967 | USING MODEL-BASED DIAGNOSIS TO IMPROVE SOFTWARE TESTING - An artificial-intelligence-based method for improving a software testing process, according to which, upon finding a bug, a set of candidate diagnoses is proposed to the tester based on a Model-Based Diagnosis (MBD) process. A planning process is used to automatically suggest further test steps to be performed by the tester, to identify the correct diagnosis for the developer in the form of the faulty software component that caused the bug, while minimizing the test steps performed by the tester. Additional information is provided to the MBD process, based on the outputs of the further test steps, thereby pruning incorrect candidate diagnoses. These steps are repeated iteratively, each time reducing the set of candidate diagnoses, until a single diagnosis remains in the set. | 2017-06-01 |
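The abstract above describes an iterative diagnose-test-prune loop. A minimal Python sketch of that loop follows; the candidate representation (a set of suspect components), the test oracle, and all names are hypothetical stand-ins for illustration, not the patented method.

```python
# Sketch of the iterative diagnose-test-prune loop from the abstract.
# Candidate diagnoses and the test oracle are hypothetical stand-ins.

def diagnose(candidates, suggest_test, run_test):
    """Iteratively prune candidate diagnoses until one remains.

    candidates   -- set of candidate faulty components
    suggest_test -- picks the next test step given current candidates
    run_test     -- executes the test; returns the set of candidates
                    consistent with the observed outcome
    """
    candidates = set(candidates)
    while len(candidates) > 1:
        test = suggest_test(candidates)
        consistent = run_test(test)
        candidates &= consistent  # prune inconsistent diagnoses
    return candidates.pop()

# Toy usage: three suspects; each "test" outcome clears some components.
suspects = {"parser", "planner", "executor"}
outcomes = iter([{"planner", "executor"}, {"executor"}])
result = diagnose(suspects, lambda c: None, lambda t: next(outcomes))
```

Here each run of the loop intersects the candidate set with the components consistent with the latest test outcome, mirroring the "pruning incorrect candidate diagnoses" step.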
20170153968 | DATABASE CONFIGURATION CHECK - A system includes determination of a plurality of database configuration checks, execution of the plurality of database configuration checks against database configuration data to generate, for each of the plurality of database configuration checks, a respective result, display of one of the respective results, the displayed respective result associated with a negative result outcome, and display of information to assist resolution of the negative result outcome. | 2017-06-01 |
20170153969 | SYSTEM AND METHOD FOR EXECUTING INTEGRATION TESTS IN MULTIUSER ENVIRONMENT - A system and method for executing integration tests for testing software code are disclosed. The system comprises a multi-user integrated test framework to simulate a multi-user test environment for executing integration tests concurrently. The multi-user integrated test framework receives ‘N’ as the number of concurrent users for an integration test project comprising integration tests associated with software code. The multi-user integrated test framework simulates the multi-user test environment by locating a target assembly associated with the integration test project and generating an executing assembly using the integration tests from the target assembly. Simulating the multi-user test environment further comprises dissociating the target assembly from the executing assembly, such that the target assembly is prevented from locking, and concurrently executing the integration tests N times using the executing assembly and a parallel task library to obtain the outcome of each of the integration tests. | 2017-06-01 |
20170153970 | GLOBALIZATION TESTING MANAGEMENT USING A SET OF GLOBALIZATION TESTING OPERATIONS - Disclosed aspects may include collecting a set of globalization data. The set of globalization data may relate to a set of globalization parameters. Based on the set of globalization data, it may be determined to execute a set of globalization testing operations. Accordingly, the set of globalization data may be processed by executing the set of globalization testing operations. In response to the processing, a set of globalization test output data can be established. | 2017-06-01 |
20170153971 | GLOBALIZATION TESTING MANAGEMENT SERVICE CONFIGURATION - Disclosed aspects may include examining a set of product development data of a product development environment. In response to the examining, a set of globalization data may be identified. The set of globalization data may relate to a set of globalization parameters. In response to the identifying, the set of globalization data may be transmitted. Disclosed aspects may include receiving a set of globalization data which relates to a set of globalization parameters. By processing the set of globalization data using a set of globalization testing operations, a globalization test output can be determined. In response to the determining, the globalization test output can be provided. | 2017-06-01 |
20170153972 | Relocating A Virtual Address In A Persistent Memory - Some examples described herein relate to relocating a virtual address in a persistent memory. An example includes determining whether a base address of a virtual address segment in a persistent memory has changed. In response to the determination that the base address of the virtual address segment has changed, an offset value between the base address of the virtual address segment and a new base address of the virtual address segment is determined. The offset value is used to relocate a virtual address of a primary data structure in the virtual address segment from a present location to a new location in the persistent memory. Then, a present location of a virtual address of an associated data structure of the primary data structure in the virtual address segment is determined. The offset value is used to relocate the virtual address of the associated data structure of the primary data structure from a current location to another location in the persistent memory. | 2017-06-01 |
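The relocation scheme in the abstract above — compute one offset from the old and new base addresses, then apply it to each stored virtual address — can be sketched in a few lines. This is a toy model using plain dicts for the persisted structures; the names and addresses are invented for illustration.

```python
# Sketch: rebase stored virtual addresses in a persistent segment after
# its base address has changed. Plain dicts stand in for persisted
# primary and associated data structures.

def relocate(segment, old_base, new_base):
    """Adjust every stored virtual address by the base-address offset."""
    if new_base == old_base:
        return dict(segment)  # base unchanged; nothing to relocate
    offset = new_base - old_base
    relocated = {}
    for name, addr in segment.items():
        # Primary and associated structures get the same offset applied.
        relocated[name] = addr + offset
    return relocated

seg = {"primary": 0x1000, "associated": 0x1040}
moved = relocate(seg, old_base=0x1000, new_base=0x8000)
# moved["primary"] == 0x8000, moved["associated"] == 0x8040
```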
20170153973 | FACILITATING EFFICIENT GARBAGE COLLECTION BY DYNAMICALLY COARSENING AN APPEND-ONLY LOCK-FREE TRIE - The disclosed embodiments provide a remembered set implementation for use during an incremental garbage collection, wherein the implementation includes a trie that can be dynamically coarsened to conserve memory. During operation, responsive to storing a reference into a location in a referenced memory area during the execution of a software program, the system finds, within a trie that serves as a remembered set for the referenced memory area, a particular entry that corresponds to a particular address range that covers the location. The system then marks the particular entry to indicate that the particular address range should be processed during a garbage collection. Based on a policy, the system then coarsens a particular subtree of the trie in which the particular entry is stored. Next, during the garbage collection, the system processes a particular larger address range when a root entry of the particular subtree is visited. | 2017-06-01 |
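The coarsening idea in the abstract above — replace many fine-grained marked entries in a subtree with a single root entry covering a larger address range — can be illustrated with a toy two-level "trie" over address ranges. The card/page sizes and the coarsening policy below are invented for illustration, not taken from the patent.

```python
# Sketch of a coarsenable remembered set: a two-level trie over address
# ranges. Card/page sizes and the coarsening policy are illustrative.

CARD = 256          # bytes covered by a leaf entry
PAGE = 16 * CARD    # bytes covered by a subtree root
COARSEN_AT = 4      # policy: coarsen once this many cards in a page are marked

class RememberedSet:
    def __init__(self):
        self.pages = {}   # page index -> set of marked card indices, or 'ALL'

    def mark(self, address):
        page, offset = divmod(address, PAGE)
        cards = self.pages.setdefault(page, set())
        if cards == 'ALL':
            return  # subtree already coarsened; whole page is covered
        cards.add(offset // CARD)
        if len(cards) >= COARSEN_AT:
            self.pages[page] = 'ALL'  # coarsen: one root entry replaces children

    def dirty_ranges(self):
        # During GC, a coarsened root yields one larger address range.
        for page, cards in sorted(self.pages.items()):
            if cards == 'ALL':
                yield (page * PAGE, (page + 1) * PAGE)
            else:
                for c in sorted(cards):
                    base = page * PAGE + c * CARD
                    yield (base, base + CARD)

rs = RememberedSet()
for addr in (0, 300, 600, 900):   # four cards in page 0 -> coarsened
    rs.mark(addr)
# rs.dirty_ranges() now yields the single coarsened range (0, 4096)
```

Trading precision for memory this way means the collector scans one larger range instead of tracking many small ones, which is the stated point of coarsening.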
20170153974 | DUAL SPACE STORAGE MANAGEMENT SYSTEM AND DATA READ/WRITE METHOD - A computer system includes an addressing assembly, connected respectively to high bits of a memory address line of a processor and high bits of a word address line of a storage, and used to convert, in a preset continuous or discrete range on the storage, high bits of a memory address formed by the processor into high bits of a corresponding word address of the storage and output the high bits to the storage. Low bits of the memory address line of the processor are connected to low bits of the word address line of the storage. The preset range is smaller than or equal to an addressing range of the memory address line of the processor. The processor changes the storage units of the storage covered by the preset range by changing the preset range, which reduces cost, improves operational efficiency, shortens operation time, and provides wide applicability. | 2017-06-01 |
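The address split described above — low bits wired straight through, high bits translated by the addressing assembly — reduces to simple bit arithmetic. The sketch below models the translation in Python; the 12-bit split and the mapping table are invented for illustration.

```python
# Sketch of the high-bit remapping the abstract describes: low address
# bits pass straight through, high bits are translated by a preset map.

LOW_BITS = 12                      # low bits wired directly (illustrative)
LOW_MASK = (1 << LOW_BITS) - 1

def translate(mem_addr, high_map):
    """Map a processor memory address to a storage word address."""
    high = mem_addr >> LOW_BITS    # high bits go through the addressing assembly
    low = mem_addr & LOW_MASK      # low bits are connected directly
    return (high_map[high] << LOW_BITS) | low

# Preset range: processor pages 0 and 1 map to storage pages 7 and 3.
high_map = {0: 7, 1: 3}
addr = translate(0x1ABC, high_map)   # page 1, offset 0xABC
# addr == (3 << 12) | 0xABC == 0x3ABC
```

Changing the entries of `high_map` corresponds to the processor changing the preset range, retargeting which storage units the same memory addresses reach.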
20170153975 | APPARATUS AND METHOD FOR HANDLING ATOMIC UPDATE OPERATIONS - An apparatus and method are provided for handling atomic update operations. The apparatus has a cache storage to store data for access by processing circuitry, the cache storage having a plurality of cache lines. Atomic update handling circuitry is used to handle performance of an atomic update operation in respect of data at a specified address. When data at the specified address is determined to be stored within a cache line of the cache storage, the atomic update handling circuitry performs the atomic update operation on the data from that cache line. Hazard detection circuitry is used to trigger deferral of performance of the atomic update operation upon detecting that a linefill operation for the cache storage is pending that will cause a chosen cache line to be populated with data that includes data at the specified address. The linefill operation causes the apparatus to receive a sequence of data portions that collectively form the data for storing in the chosen cache line. Partial linefill notification circuitry is used to provide partial linefill information to the atomic update handling circuitry during the linefill operation, and the atomic update handling circuitry is arranged to initiate the atomic update operation responsive to detecting from the partial linefill information that the data at the specified address is available for the chosen cache line. This can provide a performance benefit, by avoiding the need for the atomic update handling circuitry to await completion of the linefill operation before beginning the atomic update operation. | 2017-06-01 |
20170153976 | ASYNCHRONOUS CLEANUP AFTER A PEER-TO-PEER REMOTE COPY (PPRC) TERMINATE RELATIONSHIP OPERATION - For asynchronous cleanup after a peer-to-peer remote copy (PPRC) terminate relationship operation in a computing storage environment by a processor device, a plurality of PPRC modified-sectors bitmaps are asynchronously cleaned up using a PPRC terminate-relationship cleanup operation by throttling the number of tasks performing the cleanup operation, and the PPRC relationship is terminated by calling a cache to perform a terminate cleanup bind segment scan operation on a plurality of bind segments. | 2017-06-01 |
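The throttling aspect above — bounding how many cleanup tasks run at once — can be modeled with an ordinary counting semaphore. The sketch below is a generic illustration of that pattern, not the patented storage-controller logic; the task count and bitmap contents are invented.

```python
# Sketch of throttled asynchronous cleanup: at most MAX_TASKS worker
# tasks clear modified-sectors bitmaps at a time. Counts are illustrative.
import threading

MAX_TASKS = 2
throttle = threading.Semaphore(MAX_TASKS)

def cleanup_bitmap(bitmaps, index, done):
    with throttle:                 # throttle concurrent cleanup tasks
        bitmaps[index] = 0         # clear this modified-sectors bitmap
        done.append(index)

bitmaps = [0b1011, 0b0110, 0b1111, 0b0001]
done = []
threads = [threading.Thread(target=cleanup_bitmap, args=(bitmaps, i, done))
           for i in range(len(bitmaps))]
for t in threads:
    t.start()
for t in threads:
    t.join()
# All bitmaps are cleared, with no more than two tasks inside the
# throttled section at any moment.
```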
20170153977 | DATA CACHING - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for caching data not frequently accessed. One of the methods includes receiving a request for data from a component of a device, determining that the data satisfies an infrequency condition, in response to determining that the data satisfies the infrequency condition: determining a target cache level which defines a cache level within a cache level hierarchy of a particular cache at which to store infrequently accessed data, the target cache level being lower than a highest cache level in the cache level hierarchy, requesting and receiving the data from a memory that is not a cache of the device, and storing the data in a level of the particular cache that is at or below the target cache level in the cache level hierarchy, and providing the data to the component. | 2017-06-01 |
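The placement policy above — data satisfying an infrequency condition goes to a target cache level below the highest level — can be sketched with a small multi-level cache model. The level count, the infrequency condition, and all names below are invented for illustration.

```python
# Sketch: store infrequently accessed data at or below a target cache
# level instead of the highest level. Levels and thresholds are made up.

class LeveledCache:
    def __init__(self, levels=3, target_level=1):
        # Level 0 is the lowest; the highest level is levels - 1.
        self.levels = [dict() for _ in range(levels)]
        self.target = target_level
        self.hits = {}

    def is_infrequent(self, key):
        return self.hits.get(key, 0) < 2   # illustrative infrequency condition

    def store(self, key, value):
        # Infrequent data stays at/below the target level, leaving the
        # highest level for frequently accessed data.
        level = self.target if self.is_infrequent(key) else len(self.levels) - 1
        self.levels[level][key] = value

    def get(self, key, load):
        self.hits[key] = self.hits.get(key, 0) + 1
        for tier in reversed(self.levels):
            if key in tier:
                return tier[key]
        value = load(key)       # fetch from backing memory, not a cache
        self.store(key, value)
        return value

cache = LeveledCache()
value = cache.get("x", lambda k: 42)   # first access: infrequent
# "x" lands in level 1 (the target level), not in the highest level 2
```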
20170153978 | OPTIMIZED CACHING OF SLICES BY A DS PROCESSING UNIT - A computing device includes an interface configured to interface and communicate with a dispersed storage network (DSN), a memory that stores operational instructions, and a processing module operably coupled to the interface and memory such that the processing module, when operable within the computing device based on the operational instructions, is configured to perform various operations. The computing device receives a data access request involving a set of encoded data slices (EDSs) associated with a data object that are distributedly stored among storage units (SUs), including first SU(s) coupled via a local network of the DSN and second SU(s) remotely located to the computing device and coupled via an external network of the DSN. The computing device caches within its memory a subset of the EDSs stored within the second SU(s) remotely located to the computing device and coupled to the computing device via the external network. | 2017-06-01 |
20170153979 | DYNAMIC RESULT SET CACHING WITH A DATABASE ACCELERATOR - According to one embodiment of the present invention, a system for processing a database query stores one or more result sets for one or more first database queries in a data store. The system receives a second database query and compares the second database query to the one or more first database queries to determine presence of a corresponding result set in the data store for the second database query. The system provides the corresponding result set from the data store for the second database query based on the comparison. Embodiments of the present invention further include a method and computer program product for processing a database query in substantially the same manners described above. | 2017-06-01 |
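The comparison step above — match an incoming query against queries whose result sets are already stored, and serve the stored rows on a match — can be sketched with a small cache keyed by a normalized query string. The normalization (whitespace and case folding) is an assumption for illustration; a real accelerator's matching would be more sophisticated.

```python
# Sketch of result-set reuse: compare an incoming query against queries
# whose results are already stored, and serve the cached rows on a match.

class ResultSetCache:
    def __init__(self):
        self.store = {}   # normalized query -> cached result set

    @staticmethod
    def _normalize(sql):
        # Illustrative normalization: collapse whitespace, fold case.
        return " ".join(sql.lower().split())

    def save(self, sql, rows):
        self.store[self._normalize(sql)] = rows

    def lookup(self, sql):
        # Returns the stored result set if this query matches a prior one,
        # else None (caller then executes the query normally).
        return self.store.get(self._normalize(sql))

cache = ResultSetCache()
cache.save("SELECT id FROM t", [(1,), (2,)])
rows = cache.lookup("select id   from T")   # matches after normalization
# rows == [(1,), (2,)]
```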
20170153980 | ANONYMIZED NETWORK ADDRESSING IN CONTENT DELIVERY NETWORKS - Systems, methods, apparatuses, and software for a content delivery network that caches content for delivery to end user devices are presented. In one example, a content delivery network (CDN) is presented having a plurality of cache nodes that cache content for delivery to end user devices. The CDN includes an anonymization node configured to establish anonymized network addresses for transfer of content to cache nodes from one or more origin servers that store the content before caching by the CDN. The anonymization node is configured to provide indications of relationships between the anonymized network addresses and the cache nodes to a routing node of the CDN. The routing node is configured to route the content transferred by the one or more origin servers responsive to content requests of the cache nodes, based on the indications of the relationships between the anonymized network addresses and the cache nodes. | 2017-06-01 |
20170153981 | Using Shared Virtual Memory Resources for Performing Memory-Mapping - Functionality is described herein for memory-mapping an information unit (such as a file) into virtual memory by associating shared virtual memory resources with the information unit. The functionality then allows processes (or other entities) to interact with the information unit via the shared virtual memory resources, as opposed to duplicating separate private instances of the virtual memory resources for each process that requests access to the information unit. The functionality also uses a single level of address translation to convert virtual addresses to corresponding physical addresses. In one implementation, the information unit is stored on a bulk-erase type block storage device, such as a flash storage device; here, the single level of address translation incorporates any address mappings identified by wear-leveling and/or garbage collection processing, eliminating the need for the storage device to perform separate and independent address mappings. | 2017-06-01 |
20170153982 | INVALIDATION OF TRANSLATION LOOK-ASIDE BUFFER ENTRIES BY A GUEST OPERATING SYSTEM - Embodiments disclose techniques for enabling a guest operating system (OS) to directly invalidate entries in a translation lookaside buffer (TLB). In one embodiment, the guest OS receives one or more invalidation credits for invalidating translation entries in a translation lookaside buffer (TLB) from a hypervisor. The guest OS decrements one invalidation credit from the one or more invalidation credits after invalidating a translation entry in the TLB. Upon determining that there are no remaining invalidation credits, the guest OS requests additional invalidation credits from the hypervisor. The hypervisor may choose to allocate the additional invalidation credits, based upon a determination of whether or not the guest OS is a rogue OS that poses a threat or risk to other guest OSes in a computing system. | 2017-06-01 |
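The credit protocol above — spend one credit per invalidation, request more when depleted, with the hypervisor free to refuse — is easy to model. The sketch below is a toy illustration with invented class names and a simplistic rogue check, not the patented mechanism.

```python
# Sketch of credit-gated TLB invalidation: the guest spends one credit
# per invalidation and must ask the hypervisor for more when depleted.

class Hypervisor:
    def grant(self, guest, amount=4):
        # A real hypervisor would judge rogue behavior; this toy check
        # simply refuses any guest flagged as rogue.
        return 0 if guest.rogue else amount

class Guest:
    def __init__(self, hypervisor, rogue=False):
        self.hv = hypervisor
        self.rogue = rogue
        self.credits = 0
        self.tlb = {}          # virtual address -> translation entry

    def invalidate(self, vaddr):
        if self.credits == 0:
            self.credits = self.hv.grant(self)   # request more credits
            if self.credits == 0:
                raise PermissionError("no invalidation credits granted")
        self.tlb.pop(vaddr, None)   # directly invalidate the TLB entry
        self.credits -= 1

hv = Hypervisor()
guest = Guest(hv)
guest.tlb = {0x1000: 0x9000}
guest.invalidate(0x1000)
# The entry is gone and three of the four granted credits remain.
```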
20170153983 | SUPERVISORY MEMORY MANAGEMENT UNIT - A system includes a central processing unit (CPU) to process data with respect to a virtual address generated by the CPU. A first memory management unit (MMU) translates the virtual address to a physical address of a memory with respect to the data processed by the CPU. A supervisory MMU translates the physical address of the first MMU to a storage address for storage and retrieval of the data in the memory. The supervisory MMU controls access to the memory via the storage address generated by the first MMU. | 2017-06-01 |
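The two translation stages above — the first MMU maps virtual to physical, the supervisory MMU maps physical to storage and gates access — can be sketched with two lookup tables. The tables and addresses below are invented for illustration.

```python
# Sketch of the two translation stages: the first MMU maps virtual ->
# physical, and the supervisory MMU maps physical -> storage, refusing
# physical addresses it has no mapping for. Tables are illustrative.

def access(vaddr, mmu_table, supervisory_table):
    paddr = mmu_table[vaddr]              # first MMU: virtual -> physical
    if paddr not in supervisory_table:    # supervisory MMU controls access
        raise PermissionError(f"physical address {paddr:#x} not permitted")
    return supervisory_table[paddr]       # physical -> storage address

mmu = {0x4000: 0x1000}
supervisory = {0x1000: 0xA000}
storage_addr = access(0x4000, mmu, supervisory)
# storage_addr == 0xA000
```

Because every access must pass through the supervisory table, removing an entry from it revokes the CPU's reach to that storage region regardless of what the first MMU maps.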
20170153984 | APPARATUS AND METHOD FOR ACCELERATING OPERATIONS IN A PROCESSOR WHICH USES SHARED VIRTUAL MEMORY - An apparatus and method are described for coupling a front end core to an accelerator component (e.g., such as a graphics accelerator). For example, an apparatus is described comprising: an accelerator comprising one or more execution units (EUs) to execute a specified set of instructions; and a front end core comprising a translation lookaside buffer (TLB) communicatively coupled to the accelerator and providing memory access services to the accelerator, the memory access services including performing TLB lookup operations to map virtual to physical addresses on behalf of the accelerator and in response to the accelerator requiring access to a system memory. | 2017-06-01 |
20170153985 | METHOD TO EFFICIENTLY IMPLEMENT SYNCHRONIZATION USING SOFTWARE MANAGED ADDRESS TRANSLATION - Software-managed resources are used to utilize effective-to-real memory address translation for synchronization among processes executing on processor cores in a multi-core computing system. A failure to find a pre-determined effective memory address translation in an effective-to-real memory address translation table on a first processor core triggers an address translation exception in a second processor core and causes an exception handler on the second processor core to start a new process, thereby acting as a means to achieve synchronization among processes on the first processor core and the second processor core. The specific functionality is implemented in the exception handler, which is tailored to respond to the exception based on the address that generated it. | 2017-06-01 |
20170153986 | CACHE LONGEVITY DETECTION AND REFRESH - A web server cache performs verification of cached computational results by storing a computed function result as a cached value in a cache and, upon receiving a subsequent invocation of the function, examining the duration of the value in the cache. If the duration exceeds a staleness detection threshold, the web server recomputes the function in response to the subsequent invocation, compares the recomputed result to the cached value to validate it, and flags an error if the result differs from the cached value. Alternatively, if the duration of the cached value is within the staleness detection threshold, the method returns the cached value as the result of the subsequent invocation. | 2017-06-01 |
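The duration-gated verification above can be sketched as a small wrapper around a cache of (value, timestamp) pairs. The threshold, class name, and error handling below are invented for illustration.

```python
# Sketch of duration-gated verification: within the threshold the cached
# value is returned as-is; past it, the function is recomputed and the
# result compared against the cache. Names and threshold are illustrative.
import time

STALENESS_THRESHOLD = 60.0   # seconds (assumed)

class VerifyingCache:
    def __init__(self):
        self.values = {}   # key -> (cached value, time cached)

    def call(self, key, fn):
        if key not in self.values:
            self.values[key] = (fn(), time.monotonic())
            return self.values[key][0]
        value, cached_at = self.values[key]
        if time.monotonic() - cached_at <= STALENESS_THRESHOLD:
            return value                     # still fresh: serve from cache
        fresh = fn()                         # recompute to validate
        if fresh != value:
            raise ValueError(f"stale cache entry for {key!r}")  # flag error
        self.values[key] = (fresh, time.monotonic())
        return fresh

cache = VerifyingCache()
first = cache.call("answer", lambda: 7)    # computed and cached
again = cache.call("answer", lambda: 99)   # within threshold: cached 7 served
```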
20170153987 | IMPLICIT SHARING IN STORAGE MANAGEMENT - A physical address of a page may be identified. A first process that implements copy-on-read techniques for the page may be detected. A determination may be made that the first process is not expected to write to the page. In response to that determination, a different logical address may be established for the first process for the page from the logical address of a second process for the page, but the two logical addresses may be mapped to the same physical page. | 2017-06-01 |
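The mapping arrangement above — each process gets its own logical address, but both mappings resolve to the same physical page — can be illustrated with toy page tables. The process names, addresses, and page contents are invented for illustration.

```python
# Sketch: two processes hold distinct logical addresses for the same
# page, both mapped to one physical page, so no copy is made for a
# process that is not expected to write.

page_frames = {0: b"shared page contents"}   # physical page number -> data
page_tables = {
    "writer": {0x2000: 0},   # second process's existing logical mapping
}

def map_copy_on_read(process, logical_addr, physical_page):
    # The reading process gets its own logical address, but the mapping
    # points at the same physical page as the other process's mapping --
    # the page is shared implicitly rather than duplicated.
    page_tables.setdefault(process, {})[logical_addr] = physical_page

map_copy_on_read("reader", 0x7000, 0)
# Both logical addresses now resolve to physical page 0; a write by the
# reader would be the point at which a private copy becomes necessary.
```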
20170153988 | INTEGRATED CIRCUIT SECURITY - An integrated circuit having a security supervision system comprises a plurality of functional circuit blocks interconnected to collectively perform data processing tasks, and one or more communication adaptors having: (i) a hardware interconnection to the functional circuit blocks, whereby the communication adaptor senses the state and/or activity of a functional circuit block; (ii) memory storing definitions of state and/or activity of a functional circuit block and actions for each definition; and (iii) processing circuitry comparing the state and/or activity of the functional block with each definition, such that when state and/or activity of the functional block corresponding to a stored definition is detected, the corresponding action is performed. The memory stores a definition of state and/or activity characteristic of insecure operation of the functional circuit block and a corresponding action of (i) partially disabling the functional circuit block and/or (ii) causing a message to be transmitted to a destination off the integrated circuit. | 2017-06-01 |