2nd week of 2018 patent application highlights part 40 |
Patent application number | Title | Published |
20180011730 | MANAGEMENT OF NETWORK FUNCTIONS VIRTUALIZATION AND ORCHESTRATION APPARATUS, SYSTEM, MANAGEMENT METHOD, AND PROGRAM - Provided is a management apparatus including a maintenance mode setting unit that transitions a first virtualization infrastructure (NFVI0) to a maintenance mode, a mobility control unit that at least instructs a virtualization deployment unit (VDU) on the first virtualization infrastructure in the maintenance mode to move to a second virtualization infrastructure (NFVI1), and a maintenance mode release unit that releases the maintenance mode of the first virtualization infrastructure (NFVI0). | 2018-01-11 |
20180011731 | STORAGE ARCHITECTURE FOR VIRTUAL MACHINES - Some embodiments of the present invention include a method comprising: accessing units of network storage that encode state data of respective virtual machines, wherein the state data for respective ones of the virtual machines are stored in distinct ones of the network storage units such that the state data for more than one virtual machine are not commingled in any one of the network storage units. | 2018-01-11 |
20180011732 | ARCHITECTURE FOR IMPLEMENTING A VIRTUALIZATION ENVIRONMENT AND APPLIANCE - An improved architecture is provided which enables significant convergence of the components of a system to implement virtualization. The infrastructure is VM-aware, and permits scaled out converged storage provisioning to allow storage on a per-VM basis, while identifying I/O coming from each VM. The current approach can scale out from a few nodes to a large number of nodes. In addition, the inventive approach has ground-up integration with all types of storage, including solid-state drives. The architecture of the invention provides high availability against any type of failure, including disk or node failures. In addition, the invention provides high performance by making I/O access local, leveraging solid-state drives and employing a series of patent-pending performance optimizations. | 2018-01-11 |
20180011733 | EXITLESS TIMER ACCESS FOR VIRTUAL MACHINES - A system and method of scheduling timer access includes a first physical processor with a first physical timer executing a first guest virtual machine. A hypervisor determines an interrupt time remaining before an interrupt is scheduled and determines the interrupt time is greater than a threshold time. Responsive to determining that the interrupt time is greater than the threshold time, the hypervisor designates a second physical processor as a control processor with a control timer and sends, to the second physical processor, an interval time, which is a specific time duration. The hypervisor grants, to the first guest virtual machine, access to the first physical timer. The second physical processor detects that the interval time expires. Responsive to detecting that the interval time expired, an inter-processor interrupt is sent from the second physical processor to the first physical processor, triggering the first guest virtual machine to exit to the hypervisor. | 2018-01-11 |
20180011734 | JOB SCHEDULER TEST PROGRAM, JOB SCHEDULER TEST METHOD, AND INFORMATION PROCESSING APPARATUS - A non-transitory computer-readable storage medium storing therein a job scheduler test program that causes a computer to execute a process includes: determining whether or not a state of every thread of a test-target job scheduler is a standby state; and changing a time of a system referenced when the thread executes a process to a time that is put forward in a case where the state of every thread is the standby state. | 2018-01-11 |
20180011735 | INSTRUCTION PRE-FETCHING - Pre-fetching instructions for tasks of an operating system (OS) is provided by calling a task scheduler that determines a load start time for a set of instructions for a particular task corresponding to a task switch condition. The OS calls, and in response to the load start time, a loader entity module that generates a pre-fetch request that loads the set of instructions for the particular task from a non-volatile memory circuit into a random access memory circuit. The OS calls the task scheduler to switch to the particular task. | 2018-01-11 |
20180011736 | HARDWARE CONTROLLED INSTRUCTION PRE-FETCHING - A task control circuit maintains, in response to task event information, a task information queue that includes task information for a plurality of tasks. Based upon the task information in the task information queue, a future task switch condition is identified as corresponding to a task switch time for a particular task of the plurality of tasks. A load start time is determined for a set of instructions for the particular task. A pre-fetch request is generated to load the set of instructions for the particular task into the memory circuit. The pre-fetch request is forwarded to a hardware loader circuit. In response to the task switch time, a task event trigger is generated for the particular task. The hardware loader circuit is used to load, in response to the pre-fetch request, the set of instructions from a non-volatile memory into the memory circuit. | 2018-01-11 |
20180011737 | OPTIMIZING JOB EXECUTION IN PARALLEL PROCESSING - Scheduling jobs from an application based on a job concurrency hint. The job concurrency hint providing an indication of the number and/or size of the jobs that can be handled by the job scheduler. The scheduling of the jobs based on the job concurrency hint including selecting the number and/or size of the jobs to pass to the job scheduler for execution by a thread in a core of a processor. | 2018-01-11 |
20180011738 | METHOD FOR EXECUTING MULTITHREADED INSTRUCTIONS GROUPED INTO BLOCKS - A method for executing multithreaded instructions grouped into blocks. The method includes receiving an incoming instruction sequence using a global front end; grouping the instructions to form instruction blocks, wherein the instructions of the instruction blocks are interleaved with multiple threads; scheduling the instructions of the instruction block to execute in accordance with the multiple threads; and tracking execution of the multiple threads to enforce fairness in an execution pipeline. | 2018-01-11 |
20180011739 | DATA FACTORY PLATFORM AND OPERATING SYSTEM - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for executing software on distributed computing systems. In one aspect, a method comprises executing a workflow controller configured to manage a plurality of factory workflows; executing a plurality of extraction workers, each configured to extract data from a plurality of external data sources into a plurality of extracted datasets; executing a plurality of intermediate workers, each configured to integrate and/or contextualize the extracted datasets into a plurality of intermediate datasets; and executing a plurality of visualization workers, each configured to use the intermediate datasets to produce an interactive display or a plurality of reports based on the intermediate datasets. | 2018-01-11 |
20180011740 | Computing Resource Inventory System - Systems and methods of managing computing resources of a computing system are described. A computing resource list and computing resource information may be stored at a data store. The computing resource list may identify a set of computing resources of a computing system, and the computing resource information may respectively describe the computing resources. The computing resource list may be updated in response to a new computing resource being added to the computing system or in response to an existing computing resource being removed from the computing system. Evaluation tasks for the computing resources may be performed, and a resource evaluation report may be generated during performance of at least one of the evaluation tasks. | 2018-01-11 |
20180011741 | INTEROPERABILITY-AS-A-SERVICE IN A CLOUD ENVIRONMENT - Methods, devices, and techniques for determining interoperable resources are discussed herein. For example, in one aspect, a resource in a cloud environment may be discovered. Responsive to discovering the resource, an interoperability support matrix associated with the resource can be obtained. The interoperability support matrix may specify another resource that interoperates with the resource. An interoperability record is then stored in an interoperability support matrix repository. The interoperability record can specify that the other resource interoperates with the resource. | 2018-01-11 |
20180011742 | JOB SCHEDULING MANAGEMENT - Resource utilization data for a set of system components of a computing system is collected. The resource utilization data includes performance records for a set of jobs. By analyzing the collected resource utilization data for the set of system components, a resource allocation is identified for a particular job of the set of jobs. A first execution time for the particular job is determined based on the resource allocation for the particular job and the resource utilization data for the set of system components. A location at which to execute the particular job is determined based on how the particular job has been executed at the location previously. The first execution time may be a time when the computer system achieves a resource availability threshold with respect to the resource allocation. Aspects are also directed toward performing the particular job at the first execution time. | 2018-01-11 |
20180011743 | JOB SCHEDULING MANAGEMENT - Resource utilization data for a set of system components of a computing system is collected. The resource utilization data includes performance records for a set of jobs. By analyzing the collected resource utilization data for the set of system components, a resource allocation is identified for a particular job of the set of jobs. A first execution time for the particular job is determined based on the resource allocation for the particular job and the resource utilization data for the set of system components. A location at which to execute the particular job is determined based on how the particular job has been executed at the location previously. The first execution time may be a time when the computer system achieves a resource availability threshold with respect to the resource allocation. Aspects are also directed toward performing the particular job at the first execution time. | 2018-01-11 |
20180011744 | DETERMINING WHEN TO RELEASE A LOCK FROM A FIRST TASK HOLDING THE LOCK TO GRANT TO A SECOND TASK WAITING FOR THE LOCK - Provided are a computer program product, system, and method for determining when to release a lock from a first task holding the lock to grant to a second task waiting for the lock. A determination is made as to whether a holding of a lock to the resource by a first task satisfies a condition and whether the lock is swappable. The lock is released from the first task and granted to a second task waiting in a queue for the lock in response to determining that the holding of the lock satisfies the condition and that the lock is swappable. The first task is indicated in the queue waiting for the lock in response to granting the lock to the second task. | 2018-01-11 |
20180011745 | ITERATIVE AND HIERARCHICAL PROCESSING OF REQUEST PARTITIONS - Methods and systems disclosed herein relate generally to temporally prioritizing queries of queue-task partitions based on distributions of flags assigned to bits corresponding to access rights. | 2018-01-11 |
20180011746 | CONTROLLING DATA PROCESSING TASKS - Information representative of a graph-based program specification has a plurality of components, each of which corresponds to a task, and directed links between ports of said components. A program corresponding to said graph-based program specification is executed. A first component includes a first data port, a first control port, and a second control port. Said first data port is configured to receive data to be processed by a first task corresponding to said first component, or configured to provide data that was processed by said first task corresponding to said first component. Executing a program corresponding to said graph-based program specification includes: receiving said first control information at said first control port, in response to receiving said first control information, determining whether or not to invoke said first task, and after receiving said first control information, providing said second control information from said second control port. | 2018-01-11 |
20180011747 | METHOD AND APPARATUS FOR LOAD ESTIMATION - A disclosed load estimation method includes: collecting run information of a processor executing a predetermined program; specifying execution status of the processor based on the collected run information; and estimating a load of the predetermined program based on a result of comparison between the execution status of the processor and execution characteristics of the processor. Each of the execution characteristics is stored in association with a load level of the predetermined program. | 2018-01-11 |
20180011748 | POST-RETIRE SCHEME FOR TRACKING TENTATIVE ACCESSES DURING TRANSACTIONAL EXECUTION - A method and apparatus for post-retire transaction access tracking is herein described. Load and store buffers are capable of storing senior entries. In the load buffer a first access is scheduled based on a load buffer entry. Tracking information associated with the load is stored in a filter field in the load buffer entry. Upon retirement, the load buffer entry is marked as a senior load entry. A scheduler schedules a post-retire access to update transaction tracking information, if the filter field does not represent that the tracking information has already been updated during a pendency of the transaction. Before evicting a line in a cache, the load buffer is snooped to ensure no load accessed the line to be evicted. | 2018-01-11 |
20180011749 | SYSTEM AND METHOD FOR EXTENDING A WEB SERVICE ENVIRONMENT TO SUPPORT SCALABLE ASYNCHRONOUS CLIENTS - An asynchronous transport enables decoupling the delivery of an operation's request and response messages, from the request and response cycle of a single exchange on the transport. The response message need not be delivered as the response to the initial transport request, but can instead be delivered by the web service to a response endpoint or other location selected by the client, using a new connection originating from the service. In accordance with an embodiment, the client communicates with the web service via request messages that include a SOAP header, formatted according to an XML format, and conveyed using the HTTP. The asynchronous transport automatically deploys, for a client, an endpoint reference that will ultimately receive corresponding response messages. This allows the response to ultimately return to the client application in a manner consistent with the JAX-WS specification, without the developer having to change their original client application. | 2018-01-11 |
20180011750 | SYNCHRONIZATION OF CODE EXECUTION - A system for determining a toggle value includes an input interface and a processor. The input interface is to receive a request for the toggle value associated with a toggle. The processor is to determine an indicated toggle value associated with the toggle; determine the toggle value associated with the toggle based at least in part on the indicated toggle value and a set of dependencies; and provide the toggle value associated with the toggle. | 2018-01-11 |
20180011751 | Unmanned Ground and Aerial Vehicle Attachment System - Techniques are disclosed for hot swapping one or more module devices on a single host device. A module device can perform module-specific tasks that are defined in its module software driver. Using one or more application programming interfaces, the host device communicates with the module device's module software driver to allow the module device to perform module-specific tasks while removably connected to the host device. | 2018-01-11 |
20180011752 | COMPUTER READABLE STORAGE MEDIA FOR DYNAMIC SERVICE DEPLOYMENT AND METHODS AND SYSTEMS FOR UTILIZING SAME - Systems and methods for service deployment are disclosed herein. Certain implementations may include a memory encoded with computer executable instructions that when executed cause a processing unit to operate a service deployment engine and use consistent APIs both (a) internally via a package API when consuming deployment packages in order to expose them, and (b) externally via a service API when exposing available packages and services to the outside world or enterprise server. By doing so, calling applications can depend on the consistency of the service API engine while the enterprise server itself can reliably consume and interact with a dynamic set of packages organized in a consistent and predictable way. The service deployment engine may be configured to act as a dynamic library loader to interrogate, deploy, start/stop, and/or uninstall packages and services in real time. The packages and services may all implement the same package API. | 2018-01-11 |
20180011753 | ADAPTIVE READ THRESHOLD VOLTAGE TRACKING WITH BIT ERROR RATE ESTIMATION BASED ON NON-LINEAR SYNDROME WEIGHT MAPPING - Adaptive read threshold voltage tracking techniques are provided that employ bit error rate estimation based on a non-linear syndrome weight mapping. An exemplary device comprises a controller configured to determine a bit error rate for at least one of a plurality of read threshold voltages in a memory using a non-linear mapping of a syndrome weight to the bit error rate for the at least one of the plurality of read threshold voltages. | 2018-01-11 |
20180011754 | NONVOLATILE MEMORY SYSTEM AND ERROR DETERMINATION METHOD THEREOF - A memory system may be provided. The memory system may include a memory apparatus including a plurality of memory cells. The memory system may also include a controller configured to control a write operation and a read operation with respect to the memory apparatus, detect an error occurrence position by performing the write operation and the read operation on a corresponding region of the memory apparatus in which an error occurs based on error occurrence address information generated in the read operation while changing a level of data to be written, and determine a type of error based on the detected error occurrence position. | 2018-01-11 |
20180011755 | INFORMATION PROCESSING APPARATUS FOR ANALYZING HARDWARE FAILURE AND INFORMATION PROCESSING SYSTEM THEREFOR - Provided is an information processing apparatus. The information processing apparatus includes memory, a processor configured to control a device, a circuit connected with the memory, the processor and the device and configured to store a first sequence which causes a failure of the device in a first storage area in the memory, store a second sequence which prevents the failure in a second storage area in the memory, determine whether a third sequence for controlling the device included in a packet output from the processor is the first sequence, coordinate the third sequence by using the second sequence when the third sequence is the first sequence, and generate a packet including the coordinated third sequence. | 2018-01-11 |
20180011756 | INFORMATION PROCESSING DEVICE AND METHOD OF TESTING - An information processing device includes a first port and a processor coupled to the first port and configured to transmit, via the first port, a first signal to a first device coupled to the first port, cause a second device coupled to the first port to determine whether a failure is present in the first port when the information processing device does not receive a first response signal in response to the first signal, and determine that the failure is present in the first device when the second device does not determine that the failure is present in the first port. | 2018-01-11 |
20180011757 | ERROR CORRECTION CODE MANAGEMENT OF WRITE-ONCE MEMORY CODES - Disclosed embodiments include an electronic device having a write-once memory (WOM) and a memory controller. The memory controller includes a host interface receiving a data word including first and second symbols, each having at least two bits, a WOM controller that encodes the first and second symbols and outputs a WOM-encoded word including first and second WOM codes corresponding to the first and second symbols, respectively, an error correction code (ECC) controller that encodes the WOM-encoded word and outputs an ECC-encoded word including the first and second WOM codes and a first set of ECC bits corresponding to a first write operation, and a memory device interface that writes the ECC-encoded word to the WOM device in the first write operation. Each of the first and second WOM codes includes at least three bits with at least two of the at least three bits having the same logic value. | 2018-01-11 |
20180011758 | SYSTEM AND METHOD FOR REDUCING ECC OVERHEAD AND MEMORY ACCESS BANDWIDTH - A system, and corresponding method, is described for updating or calculating ECC where the transaction volume is significantly reduced from a read-modify-write to a write, which is more efficient and reduces demand on the data access bandwidth. The invention can be implemented in any chip, system, method, or HDL code that performs protection schemes and requires ECC calculation of any kind. Embodiments of the invention enable IPs that use different protection schemes to reduce power consumption and reduce bandwidth access to more efficiently communicate or exchange information. | 2018-01-11 |
20180011759 | HIGH PERFORMANCE INTERCONNECT LINK LAYER - Transaction data is identified and a flit is generated to include three or more slots and a floating field to be used as an extension of any one of two or more of the slots. In another aspect, the flit is to include two or more slots, a payload, and a cyclic redundancy check (CRC) field to be encoded with a 16-bit CRC value generated based on the payload. The flit is sent over a serial data link to a device for processing, based at least in part on the three or more slots. | 2018-01-11 |
20180011760 | MEMORY SYSTEM AND METHOD OF CONTROLLING NONVOLATILE MEMORY - According to one embodiment, a memory system includes a nonvolatile memory and a controller. The controller manages a plurality of namespaces for storing a plurality of kinds of data having different update frequencies. The controller encodes write data by using first coding for reducing wear of a memory cell to generate first encoded data, and generates second encoded data to be written to the nonvolatile memory by adding an error correction code to the first encoded data. The controller changes the ratio between the first encoded data and the error correction code based on the namespace to which the write data is to be written. | 2018-01-11 |
20180011761 | HOT-READ DATA AGGREGATION AND CODE SELECTION - An apparatus comprises a memory and a controller. The memory generally comprises a plurality of memory modules, each having a size less than a total size of the memory and configured to store data. The controller may be configured to process a plurality of read/write operations, classify data pages from multiple blocks of the memory as hot-read data or non hot-read data, and aggregate the hot-read data by selecting one or more of the hot-read data pages from multiple memory blocks and mapping the selected hot-read data pages to dedicated hot-read data blocks using a strong type of error correcting code during one or more of a garbage collection state, a data recycling state, or an idle state. The aggregation of the hot-read data pages and use of the strong type of error correcting code reduces read latency of the hot-read data pages, reduces a frequency of data recycling of the hot-read data pages, and reduces an impact of read disturbs on endurance of the memory. | 2018-01-11 |
20180011762 | POOL-LEVEL SOLID STATE DRIVE ERROR CORRECTION - A method for performing error correction for a plurality of storage drives and a storage appliance comprising a plurality of storage devices is disclosed. In one embodiment, the method includes generating a first set of parity bits from a first set of data of at least one of the plurality of storage devices, the first set of parity bits capable of correcting a first number of error bits of the first set of data. The method further includes generating a second set of parity bits from a concatenated set of the first data and a second set of data from at least another of the plurality of storage devices, the second set of parity bits capable of correcting a second number of error bits of the first set of data, the second number being greater than the first number. The method further includes reading the first set of data and (i) correcting error bits within the first set of data with the first set of parity bits where a number of error bits is less than the first number of error bits; and (ii) correcting error bits within the first set of data with the second set of parity bits where the number of error bits is greater than the first number. | 2018-01-11 |
20180011763 | STORAGE DEVICE - A storage device according to an embodiment of the present invention has a plurality of storage nodes, each of which has a plurality of logical ports having send and receive queues for a communication request and an identification number, and an internal network for connecting the plurality of storage nodes with one another. The storage nodes each have, as the logical ports, a data communication logical port used for data communication with other storage nodes and an error communication logical port used to notify the other storage nodes of a state of the data communication logical port. When detecting an occurrence of transition of the data communication logical port to an error state, the storage node uses the error communication logical port to notify the other storage nodes of the identification number and the state of the data communication logical port. | 2018-01-11 |
20180011764 | DISTRIBUTED STORAGE SYSTEM - A first node group including at least three nodes is predefined in a distributed storage system. Each node of the first node group is configured to send data blocks stored in storage devices managed by the node to other nodes belonging to the first node group. A first node is configured to receive data blocks from two or more other nodes in the first node group. The first node is configured to create a redundant code using a combination of data blocks received from the two or more other nodes and store the created redundant code to a storage device different from storage devices holding the data blocks used to create the redundant code. Among the redundant codes created by the first node, at least two are created from combinations of data blocks that differ in the logical addresses of their constituent data blocks. | 2018-01-11 |
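The redundant-code construction in 20180011764 is easiest to see with XOR standing in for whatever erasure code the system actually uses: combining blocks received from peer nodes yields a code from which any one lost block can be rebuilt. A toy sketch under that assumption:

```python
from functools import reduce


def make_redundant_code(blocks: list) -> bytes:
    """XOR equal-length data blocks received from other nodes into one
    redundant code, to be stored on a device holding none of the sources.
    XOR is a stand-in for the real erasure code."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)


def recover(surviving_blocks: list, code: bytes) -> bytes:
    """Rebuild one lost block by XOR-ing the code with the survivors."""
    return make_redundant_code(surviving_blocks + [code])
```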
20180011765 | CONTROL STATE PRESERVATION DURING TRANSACTIONAL EXECUTION - A method includes saving a control state for a processor in response to commencing a transactional processing sequence, wherein saving the control state produces a saved control state. The method also includes permitting updates to the control state for the processor while executing the transactional processing sequence. Examples of updates to the control state include key mask changes, primary region table origin changes, primary segment table origin changes, CPU tracing mode changes, and interrupt mode changes. The method also includes restoring the control state for the processor to the saved control state in response to encountering a transactional error during the transactional processing sequence. In some embodiments, saving the control state comprises saving the current control state to memory corresponding to internal registers for an unused thread or another level of virtualization. A corresponding computer system and computer program product are also disclosed herein. | 2018-01-11 |
20180011766 | DISASTER RECOVERY SYSTEMS AND METHODS - An illustrative method for storing disaster recovery data includes receiving a plurality of copies of data stored by a first memory device. Each of the plurality of copies includes a plurality of blocks of data. The method also includes storing, in a second memory device, the plurality of copies in an object-oriented format, determining, using recovery time objectives, a number of the plurality of copies to be stored in a block-oriented format, and selecting a subset of the plurality of copies having the determined number of the plurality of copies. The method further includes assigning each of the other copies of the plurality of copies to one of a plurality of clusters. Each cluster of the plurality of clusters includes one of the subset of the plurality of copies. The method also includes determining, for each cluster, a copy having a highest number of blocks also present in the other copies of the cluster and storing, in the block-oriented format, the determined copy from each cluster in a third memory device. | 2018-01-11 |
20180011767 | LOAD BALANCING ACROSS MULTIPLE DATA PATHS - Multiple data paths may be available to a data management system for transferring data between a primary storage device and a secondary storage device. The data management system may be able to gain operational advantages by performing load balancing across the multiple data paths. The system may use application layer characteristics of the data for transferring from a primary storage to a backup storage during data backup operation, and correspondingly from a secondary or backup storage system to a primary storage system during restoration. | 2018-01-11 |
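One simple way to spread backup traffic across the multiple data paths of 20180011767 is round-robin assignment of data streams to paths. The patent considers application-layer characteristics as well; the sketch below shows only the basic balancing step, with all names invented:

```python
import itertools


def balance(streams: list, paths: list) -> dict:
    """Assign each backup data stream to a data path in round-robin order,
    a minimal stand-in for load balancing across multiple data paths."""
    path_cycle = itertools.cycle(paths)
    return {stream: next(path_cycle) for stream in streams}
```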
20180011768 | CONTROL STATE PRESERVATION DURING TRANSACTIONAL EXECUTION - A method includes saving a control state for a processor in response to commencing a transactional processing sequence, wherein saving the control state produces a saved control state. The method also includes permitting updates to the control state for the processor while executing the transactional processing sequence. Examples of updates to the control state include key mask changes, primary region table origin changes, primary segment table origin changes, CPU tracing mode changes, and interrupt mode changes. The method also includes restoring the control state for the processor to the saved control state in response to encountering a transactional error during the transactional processing sequence. In some embodiments, saving the control state comprises saving the current control state to memory corresponding to internal registers for an unused thread or another level of virtualization. A corresponding computer system and computer program product are also disclosed herein. | 2018-01-11 |
20180011769 | DYNAMIC MIRRORING - One or more techniques and/or systems are provided for dynamic mirroring. A first storage node and the second storage node within a first storage cluster may locally mirror data between one another based upon a local failover partnership. The first storage node and a third storage node within a second storage cluster may remotely mirror data between one another based upon a primary disaster recovery partnership. If the third storage node fails, then the first storage node may remotely mirror data to a fourth storage node within the second storage cluster based upon an auxiliary disaster recovery partnership. In this way, data loss protection for the first storage node may be improved, such that the fourth storage node may provide clients with access to mirrored data from the first storage node in the event the second storage node and/or the third storage node are unavailable when the first storage node fails. | 2018-01-11 |
20180011770 | MEMORY MANAGEMENT SYSTEM AND METHOD THEREOF - Disclosed are a memory management system and a method thereof. Restricted spare cells are optimally distributed (or allocated) into a physical region and a virtual region in a system for repairing a fault of a memory, thereby increasing a yield of a memory chip. | 2018-01-11 |
20180011771 | APPLICATION UPDATES - Described herein are example systems and computer-implemented methods for monitoring changes to an application. For example, information regarding a change made to an aspect of an application may be received by a processor. It may be determined whether a similarity of the change to a cluster of changes related to the aspect is within a change threshold. Further, the change may be associated with the cluster of changes when the similarity of the change is within the change threshold. It may be further determined whether a metric based on a number of changes associated with the cluster of changes is within a cluster range. When the metric is within the cluster range, a prototype change may be extracted from the cluster of changes. The application may be updated based on the prototype change when the metric is within the cluster range. | 2018-01-11 |
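The change-clustering flow in the abstract above — associate a change with a cluster when similar enough, then extract a prototype once the cluster is large enough — can be sketched in Python. This is a speculative illustration, not the patented method; the similarity measure, `threshold`, and `cluster_min` values are all invented for the example.

```python
from collections import Counter

def similarity(a, b):
    """Toy similarity: fraction of fields on which two change records agree."""
    keys = set(a) | set(b)
    same = sum(1 for k in keys if a.get(k) == b.get(k))
    return same / len(keys)

def process_change(change, cluster, threshold=0.5, cluster_min=3):
    """Associate a change with the cluster if it is similar enough; once the
    cluster holds cluster_min changes, extract a prototype change built from
    the most frequent value of each field."""
    if cluster and max(similarity(change, c) for c in cluster) < threshold:
        return None  # change does not belong to this cluster
    cluster.append(change)
    if len(cluster) >= cluster_min:
        proto = {}
        for key in {k for c in cluster for k in c}:
            vals = Counter(c[key] for c in cluster if key in c)
            proto[key] = vals.most_common(1)[0][0]
        return proto
    return None
```

Here the "metric within a cluster range" is reduced to a simple minimum cluster size, and the prototype is the per-field majority vote over the clustered changes.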
20180011772 | PERIPHERAL DEVICE OPERATION - Example implementations relate to peripheral device operation. For example, a peripheral device may include a processor. The processor may detect that a computing device is in communication with the peripheral device and send peripheral device information to the computing device. The peripheral device information may specify characteristics associated with the peripheral device including a purpose, capability, and context of the peripheral device. The processor may receive operational data from the computing device, where the operational data may be based on the peripheral device information and may indicate a manner of operating the peripheral device. The processor may perform an operation based on the operational data. | 2018-01-11 |
20180011773 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, COMPUTER PROGRAM, AND STORAGE MEDIUM - An information processing apparatus includes: an operating unit capable of recognizing a peripheral apparatus. The operating unit includes: a first recognizing unit configured to recognize, when a peripheral apparatus is connected to the operating unit and identification information about the connected peripheral apparatus is included in peripheral apparatus information including predetermined identification information, the connected peripheral apparatus as a first peripheral apparatus; and a second recognizing unit configured to recognize, when a peripheral apparatus is connected to the operating unit and the identification information about the connected peripheral apparatus is not included in the peripheral apparatus information, the connected peripheral apparatus as a second peripheral apparatus. | 2018-01-11 |
20180011774 | Session Template Packages for Automated Load Testing - A computer-implemented method includes scanning a clip of messages that includes message requests and message responses arranged in a sequence. The scanning is performed based on one or more search parameters and produces a list of one or more name/value pairs. The clip is utilized to perform a load test on a target website. Each name/value pair has a corresponding value. For each name/value pair in the list a message request in the clip is identified where the corresponding value is first used. Then, looking backwards in the sequence from the message request where the corresponding value is first used, prior message responses are located where the corresponding value is found. An extraction point is specified in the clip for the corresponding value as a latest message response in the sequence where the corresponding value was returned from the target website. The corresponding value is then stored as a property. | 2018-01-11 |
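The backward search described above — find the first request that uses a value, then scan earlier messages for the latest response that returned it — is straightforward to sketch. This is an illustrative reading of the abstract, not the claimed implementation; the clip representation as a list of `{"type", "body"}` dicts is an assumption.

```python
def find_extraction_point(clip, value):
    """clip: ordered list of {"type": "request"|"response", "body": str}.
    Returns the index of the latest response, prior to the first request
    using `value`, whose body contains `value`; None if no such message."""
    first_use = next((i for i, m in enumerate(clip)
                      if m["type"] == "request" and value in m["body"]), None)
    if first_use is None:
        return None
    # look backwards from the first use for the latest response returning the value
    for i in range(first_use - 1, -1, -1):
        if clip[i]["type"] == "response" and value in clip[i]["body"]:
            return i
    return None
```

The returned index is where a load-test tool would place the extraction point so the value can be captured and stored as a property for replay.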
20180011775 | PROVIDING DEBUG INFORMATION ON PRODUCTION CONTAINERS USING DEBUG CONTAINERS - A method and associated system for providing debug information associated with computer software executing in a production container. The production container is replicated as a debug container. The computer software is executed in the production container and the debug container. Executing the computer software includes replicating requests sent to the computer software executing in the production container to the computer software executing in the debug container. Requests from the computer software executing in the production container are stored together with any responses to the stored requests. Debug information generated by the computer software executing in the debug container is stored. | 2018-01-11 |
20180011776 | LIGHTWEIGHT TRACE BASED MEASUREMENT SYSTEMS AND METHODS - An automotive electronics system includes an electronic control unit and a trace adapter. The electronic control unit is configured to receive measurement signals and provide control signals. Additionally, the electronic control unit is configured to generate or provide trace signals by replacing original instructions in a binary image with trace instructions. The trace instructions are functionally equivalent, but trigger providing the trace signals. The trace adapter is coupled to the electronic control unit. The trace adapter is configured to obtain the trace signals from the electronic control unit. | 2018-01-11 |
20180011777 | DEBUGGING STREAMING APPLICATIONS USING DEBUG REGIONS - A method for debugging a streaming application is provided. The method may include establishing, by a processor, a debug region in the streaming application to mark a subgraph for debugging. The method may further include receiving a tuple flow suspension notification from a job control plane. The method may also include displaying a plurality of debugging options for debugging a streaming application on a graphical user interface based on the received tuple flow suspension notification. The method may further include determining a debugging option from the displayed plurality of debugging options based on a user selection on the graphical user interface. The method may also include translating the determined debugging option into a format compatible with the job control plane. The method may further include transmitting the translated debugging option to the job control plane. | 2018-01-11 |
20180011778 | STATIC CODE TESTING OF ACTIVE CODE - A code deployment system deploys code to a set of application systems that execute the application, which may be across several tiers of systems that service requests related to the application. At each system, the application executes and is analyzed during execution to determine active code that is loaded by the application during execution, which may include dynamically-generated code. The active code is then analyzed using static analysis to determine security vulnerabilities and errors in the code that was loaded and operated at each application tier. The active code may also be associated with a specific use case or set of inputs that were applied to the application during the monitoring. | 2018-01-11 |
20180011779 | AUTOMATICALLY GENERATING OBJECT LOCATORS FOR AUTOMATION TEST GENERATION - A device may receive a uniform resource identifier that identifies an object source. The object source may include one or more objects organized in a hierarchy of objects. The device may identify an object, of the one or more objects, to permit extraction of one or more properties of the object. The device may extract the one or more properties of the object based on identifying the object. The device may select a subset of properties, of the one or more properties, to use to generate an object locator based on extracting the one or more properties. The object locator may be associated with locating the object in the hierarchy of objects. The device may generate the object locator based on selecting the subset of properties of the object. | 2018-01-11 |
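The property-subset selection above resembles how test tools build CSS-style locators from extracted element properties. A minimal sketch of that idea follows; the preference order (`id`, then tag/class/name) and the property names are assumptions for illustration, not taken from the patent.

```python
def build_locator(obj):
    """obj: dict of extracted properties, e.g.
    {"tag": "input", "id": "user", "class": "form-field", "name": "username"}.
    Select a small property subset likely to identify the object and
    render it as a CSS-style locator string."""
    if "id" in obj:                      # an id alone is usually unique
        return "#%s" % obj["id"]
    parts = [obj.get("tag", "*")]
    if "class" in obj:
        parts.append("." + obj["class"].replace(" ", "."))
    if "name" in obj:
        parts.append('[name="%s"]' % obj["name"])
    return "".join(parts)
```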
20180011780 | WEB APPLICATION TEST SCRIPT GENERATION TO TEST SOFTWARE FUNCTIONALITY - According to an example of the present disclosure, a test script to test a web application is generated from a test case and web objects extracted from a web application. A web application testing tool may be invoked to test a functionality of the web application by executing the test script. | 2018-01-11 |
20180011781 | GENERATING PSEUDORANDOM TEST ITEMS FOR SOFTWARE TESTING OF AN APPLICATION UNDER TEST (AUT) - Generating pseudorandom test items for software testing of an Application Under Test (AUT) is provided. In one example, a method comprises selecting an oracle of one or more oracles within a test provider directory structure, the oracle defining one or more logical paths each comprising one or more test conditions and an expected result. A pseudorandom test item having an initial state of empty is generated, and a logical path of the one or more logical paths is selected. The one or more test conditions of the logical path are translated into a constraint. Based on the constraint, a pseudorandom test input is generated and stored in the pseudorandom test item in association with the expected result corresponding to the logical path. | 2018-01-11 |
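The oracle-driven generation above — pick a logical path, translate its conditions into a constraint, generate a pseudorandom input satisfying it, and pair it with the expected result — can be sketched as follows. The representation of a path's constraint as a simple `lo`/`hi` integer range is an invented simplification for the example.

```python
import random

def generate_test_item(oracle, seed=0):
    """oracle: list of logical paths, each {"lo": int, "hi": int, "expected": str}
    (the lo/hi range stands in for the path's translated constraint).
    Returns a pseudorandom test item {"input": ..., "expected": ...}."""
    rng = random.Random(seed)          # seeded => reproducible pseudorandomness
    item = {}                          # pseudorandom test item, initially empty
    path = rng.choice(oracle)          # select one logical path
    # satisfy the path's constraint with a pseudorandom input
    item["input"] = rng.randint(path["lo"], path["hi"])
    item["expected"] = path["expected"]
    return item
```

Seeding the generator keeps the items pseudorandom yet reproducible, which is what makes such items usable for regression testing.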
20180011782 | COMPANION TESTING FOR BODY-AWARE DEVICES - One embodiment provides a method, including: receiving movement data describing physical movement of a person performing a predetermined action; generating, using a processor, classification of the movement data using a test application that predicts output of a wearable device, wherein the test application has been formed using previously collected data that describe the movement of a person performing the predetermined action; determining, using the processor, whether the movement data match the predetermined action in view of the classification; receiving output of a body-aware application that detects and responds to human movement; comparing, using the processor, the output of the body-aware application with the classification; and providing, using the processor, an indication of the comparing of the output of the body-aware application and the classification. | 2018-01-11 |
20180011783 | METHOD AND DEVICE FOR AUTOMATIC TESTING - A method for automatic testing of a piece of software for a mobile device including the following steps: deriving from a description a formalized description, the description includes possible sequences of events of the software and a range for at least one input parameter of the software, the description being used for an implementation of the software; generating from the formalized description a test description; adapting the test description for the mobile device for which the software is to be tested; translating the test specification into a language assigned to the mobile device such that a test described by the test specification can be performed on the mobile device. The disclosure further relates to a corresponding device for automatic testing. | 2018-01-11 |
20180011784 | Method for Testing a Graphical Interface and Corresponding Test System - This test method for validating a specification of a graphical interface consists of developing a scenario file corresponding to the validation test to be performed. The scenario file includes a plurality of instructions, in a natural programming language, each instruction including a function, parameters and an expected state of the graphical interface following the application of the function. The test is automatically performed by interpreting the scenario file so as to generate commands intended for an engine capable of interacting with the graphical interface and monitoring the evolution of its current state, and then analyzing a result file associating each instruction of the scenario file with a result corresponding to the comparison of the current state of the graphical interface following the application of the corresponding command with the expected state. | 2018-01-11 |
20180011785 | DATA STORAGE BASED ON RANK MODULATION IN SINGLE-LEVEL FLASH MEMORY - Technologies are generally described to store data in single-level memory using rank modulation. In some examples, data to be encoded to single-level memory may be represented with a bit ranking for a group of bits. A program vector may then be determined from the bit ranking and partial program characteristics associated with the memory group(s). The memory group(s) may then be programmed according to the program vector. The encoded data may be subsequently retrieved by performing a series of partial programming operations on the memory group(s) to recover the bit ranking and derive the data represented. | 2018-01-11 |
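Rank modulation, as used above, stores information in the relative ordering of cell levels rather than their absolute values. A toy encoder/decoder that maps an integer to a permutation (and back) by lexicographic index shows the principle; real schemes use Lehmer codes and partial-program operations rather than enumerating permutations.

```python
from itertools import permutations

def encode_rank(value, n):
    """Encode an integer 0 <= value < n! as a ranking of n cells; entry i
    is cell i's relative charge rank."""
    perms = sorted(permutations(range(n)))
    return list(perms[value])

def decode_rank(ranking):
    """Recover the integer from a ranking via its lexicographic index."""
    n = len(ranking)
    perms = sorted(permutations(range(n)))
    return perms.index(tuple(ranking))
```

Because only the ordering matters, the cells never need precise absolute programming — which is what lets single-level cells carry more than one bit per group.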
20180011786 | APPARATUS AND METHOD FOR DYNAMICALLY MANAGING MEMORY - The present invention relates to a dynamic memory management method which includes generating an N-dimensional memory address space in which coordinates are in a range of N natural numbers, the sum of which is the number of bits; and mapping a predetermined linear memory address region to an address region in the N-dimensional memory address space. | 2018-01-11 |
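The mapping above — a linear address region onto coordinates in an N-dimensional address space — is essentially a mixed-radix conversion. A row-major sketch (the dimension sizes here are illustrative, not from the patent):

```python
def to_ndim(addr, dims):
    """Map a linear address to coordinates in an N-dimensional address
    space with the given per-dimension sizes (row-major order)."""
    coords = []
    for size in reversed(dims):
        coords.append(addr % size)
        addr //= size
    return tuple(reversed(coords))

def to_linear(coords, dims):
    """Inverse mapping: N-dimensional coordinates back to a linear address."""
    addr = 0
    for c, size in zip(coords, dims):
        addr = addr * size + c
    return addr
```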
20180011787 | PERFORMING GARBAGE COLLECTION ON AN OBJECT ARRAY USING ARRAY CHUNK REFERENCES - Techniques for performing garbage collection on an object array using array chunk references is described. A garbage collector (GC) thread identifies an object array to be processed. The GC thread divides the object array into array chunks. The GC thread generates array chunk references corresponding respectively to the array chunks. Each array chunk reference comprises: (a) chunk start bits representing a memory address of a start of a corresponding array chunk, and (b) chunk length bits representing a chunk length of the corresponding array chunk. The GC thread pushes the array chunk references onto the processing stack. A single processing stack concurrently stores multiple array chunk references, associated with a same object array. One or more of the array chunk references, that are associated with the same object array and stored on the processing stack, may be distributed to other GC threads for processing. | 2018-01-11 |
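The packed array-chunk references above (start bits plus length bits in one word) can be sketched with integer bit operations. The chunk size and field widths below are invented for the example, not the patent's values.

```python
CHUNK = 256          # elements per chunk (illustrative)
LEN_BITS = 16        # low bits hold the chunk length, high bits the start

def make_chunk_refs(array_start, array_len):
    """Split an object array into chunk references, each packing the chunk's
    start address (high bits) and its length (low bits) into one word."""
    refs = []
    offset = 0
    while offset < array_len:
        length = min(CHUNK, array_len - offset)
        refs.append(((array_start + offset) << LEN_BITS) | length)
        offset += length
    return refs

def unpack(ref):
    """Recover (chunk start, chunk length) from a packed reference."""
    return ref >> LEN_BITS, ref & ((1 << LEN_BITS) - 1)
```

Because each reference is self-describing, any GC thread that pops one from the processing stack can scan its chunk independently — which is what enables work distribution across threads.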
20180011788 | REDUCING IDLE RESOURCE USAGE - A method, computer program product, and system for reallocating resources of an idle application or program includes a computer for running an application or a program and starting a predetermined time interval. The computer increases a number counter for each event triggered during the predetermined time interval, and the event is a predetermined trigger that is activated during the running of the application or program. The method and system includes comparing a total number of events that occur during the predetermined time interval to a threshold value. The total number of events is the value of the number counter at the end of the predetermined interval. In response to determining, by the computer, the total number of events being below the threshold value, releasing resources allocated to the program by activating, using the computer, either: i) a garbage collector application, or ii) a resource release application. | 2018-01-11 |
20180011789 | REDUCING IDLE RESOURCE USAGE - A method, computer program product, and system for reallocating resources of an idle application or program includes a computer for running an application or a program and starting a predetermined time interval. The computer increases a number counter for each event triggered during the predetermined time interval, and the event is a predetermined trigger that is activated during the running of the application or program. The method and system includes comparing a total number of events that occur during the predetermined time interval to a threshold value. The total number of events is the value of the number counter at the end of the predetermined interval. In response to determining, by the computer, the total number of events being below the threshold value, releasing resources allocated to the program by activating, using the computer, either: i) a garbage collector application, or ii) a resource release application. | 2018-01-11 |
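The interval-counting scheme shared by the two entries above — count trigger events over a fixed interval and release resources when the total falls below a threshold — can be sketched as a small monitor class. The callback standing in for the garbage collector / resource release application is a placeholder.

```python
class IdleMonitor:
    """Count trigger events over a predetermined interval; if the total is
    below the threshold, invoke a release callback (e.g. a GC run)."""
    def __init__(self, threshold, release):
        self.threshold = threshold
        self.release = release
        self.count = 0

    def on_event(self):
        """Called for each predetermined trigger fired while the app runs."""
        self.count += 1

    def end_interval(self):
        """Compare the interval's event total to the threshold."""
        idle = self.count < self.threshold
        if idle:
            self.release()          # reclaim resources from the idle program
        self.count = 0              # start the next interval
        return idle
```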
20180011790 | USING DATA PATTERN TO MARK CACHE LINES AS INVALID - An apparatus includes a cache controller, the cache controller to receive, from a requestor, a memory access request referencing a memory address of a memory. The cache controller may identify a cache entry associated with the memory address, and responsive to determining that a first data item stored in the cache entry matches a data pattern indicating cache entry invalidity, read a second data item from a memory location identified by the memory address. The cache controller may then return, to the requestor, a response comprising the second data item. | 2018-01-11 |
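The lookup path above — treat a cache entry holding the designated pattern as invalid and fall back to memory — is easy to model. The pattern value and the dict-based cache are illustrative stand-ins for the hardware described.

```python
INVALID_PATTERN = 0xDEADBEEF  # illustrative pattern marking an invalid line

def cache_read(cache, memory, addr):
    """Return data for addr; a cached value equal to the invalid pattern is
    treated as a miss, so the value is re-read from memory and refilled."""
    if addr in cache and cache[addr] != INVALID_PATTERN:
        return cache[addr]                  # valid hit
    data = memory[addr]                     # pattern match (or miss): go to memory
    cache[addr] = data                      # refill the entry with valid data
    return data
```

The appeal of the scheme is that invalidation needs no separate valid bit per line — writing the pattern into the data itself marks the line invalid.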
20180011791 | SYSTEMS AND METHODS FOR MAINTAINING THE COHERENCY OF A STORE COALESCING CACHE AND A LOAD CACHE - A method for maintaining the coherency of a store coalescing cache and a load cache is disclosed. As a part of the method, responsive to a write-back of an entry from a level one store coalescing cache to a level two cache, the entry is written into the level two cache and into the level one load cache. The writing of the entry into the level two cache and into the level one load cache is executed at the speed of access of the level two cache. | 2018-01-11 |
20180011792 | Method and Apparatus for Shared Virtual Memory to Manage Data Coherency in a Heterogeneous Processing System - One embodiment provides for a heterogeneous computing device comprising a first processor coupled with a second processor, wherein one or more of the first or second processor includes graphics processing logic; wherein each of the first processor and the second processor includes first logic to perform virtual to physical memory address translation; and wherein the first logic includes cache coherency state for a block of memory associated with a virtual memory address. | 2018-01-11 |
20180011793 | SUPPORTING FAULT INFORMATION DELIVERY - A processor implementing techniques to supporting fault information delivery is disclosed. In one embodiment, the processor includes a memory controller unit to access an enclave page cache (EPC) and a processor core coupled to the memory controller unit. The processor core to detect a fault associated with accessing the EPC and generate an error code associated with the fault. The error code reflects an EPC-related fault cause. The processor core is further to encode the error code into a data structure associated with the processor core. The data structure is for monitoring a hardware state related to the processor core. | 2018-01-11 |
20180011794 | METHOD AND SYSTEM FOR EFFICIENT COMMUNICATION AND COMMAND SYSTEM FOR DEFERRED OPERATION - A method and system for efficiently executing a delegate of a program by a processor coupled to an external memory. A payload including state data or command data is bound with a program delegate. The payload is mapped with the delegate via the payload identifier. The payload is pushed to a repository buffer in the external memory. The payload is flushed by reading the payload identifier and loading the payload from the repository buffer. The delegate is executed using the loaded payload. | 2018-01-11 |
20180011795 | INFORMATION PROCESSING APPARATUS AND CACHE INFORMATION OUTPUT METHOD - An information processing apparatus includes a memory, and a processor coupled to the memory and configured to count a first number indicating storage of a plurality of arrays of data to each of the cache lines, the data being accessed in accordance with execution of a program, and count a second number indicating cache thrashing on the cache lines when the first number exceeds the number of ways of the cache. | 2018-01-11 |
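The two counters above can be approximated offline from an access trace: per cache set, count the distinct lines competing for it, and flag thrashing where that count exceeds the associativity. This is a speculative sketch of the idea; the modulo set-index function and parameters are assumptions.

```python
def count_thrashing(accesses, num_sets, ways):
    """accesses: list of line addresses. For each cache set, count how many
    distinct lines map to it (the first number); count sets where that
    exceeds the number of ways as thrashing (the second number)."""
    lines_per_set = {}
    for addr in accesses:
        lines_per_set.setdefault(addr % num_sets, set()).add(addr)
    first = {s: len(v) for s, v in lines_per_set.items()}
    thrashing = sum(1 for n in first.values() if n > ways)
    return first, thrashing
```

Reporting such counters lets a programmer spot arrays whose strides alias to the same sets and pad or re-lay-out the data accordingly.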
20180011796 | Apparatus for Hardware Implementation of Heterogeneous Decompression Processing - A processor includes a memory hierarchy, buffer, and a decompressor. The decompressor includes circuitry to read elements to be decompressed according to a compression scheme, parse the elements to identify literals and matches, and, with the literals and matches, generate an intermediate token stream formatted for software-based copying of the literals and matches to produce decompressed data. The intermediate token stream is to include a format for multiple tokens that are to be written in parallel with each other, and another format for tokens that include a data dependency upon themselves. | 2018-01-11 |
20180011797 | MEMORY SHARING METHOD OF VIRTUAL MACHINES BASED ON COMBINATION OF KSM AND PASS-THROUGH - A memory sharing method of virtual machines through the combination of KSM and pass-through, including: a virtual machine manager judging whether operating systems of guests use IOMMU; if not, not participating in shared mapping of the KSM technology; if yes, judging the memory pages of each guest to confirm whether the pages are mapping pages; if yes, retaining the mapping pages in the host; and if not, on the premise of keeping the properties of pass-through, applying the KSM technology to all non-mapping pages to merge the memory pages with the same contents among the various virtual machines and simultaneously perform write protection processing. The guest memory pages are divided into those special for DMA and those for non-DMA purposes, then the KSM technology is only selectively applied to the non-DMA pages, and on the premise of keeping the properties of pass-through, the object of saving memory resources is simultaneously achieved. | 2018-01-11 |
20180011798 | MEMORY HEAPS IN A MEMORY MODEL FOR A UNIFIED COMPUTING SYSTEM - A method and system for allocating memory to a memory operation executed by a processor in a computer arrangement having a first processor configured for unified operation with a second processor. The method includes receiving a memory operation from a processor and mapping the memory operation to one of a plurality of memory heaps. The mapping produces a mapping result. The method also includes providing the mapping result to the processor. | 2018-01-11 |
20180011799 | ADJUSTING ACTIVE CACHE SIZE BASED ON CACHE USAGE - Provided are a computer program product, system, and method for adjusting active cache size based on cache usage. An active cache in at least one memory device caches tracks in a storage during computer system operations. An inactive cache in the at least one memory device is not available to cache tracks in the storage during the computer system operations. During caching operations in the active cache, information is gathered on cache hits to the active cache and cache hits that would occur if the inactive cache was available to cache data during the computer system operations. The gathered information is used to determine whether to configure a portion of the inactive cache as part of the active cache for use during the computer system operations. | 2018-01-11 |
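The gathered statistic above — hits that *would* occur if the inactive cache were available — is commonly estimated with a "ghost" list of recently evicted entries. The sketch below models that, with an LRU active cache and a crude grow policy; all sizes and the policy itself are illustrative assumptions, not the patented method.

```python
from collections import OrderedDict

class AdaptiveCache:
    """LRU cache plus a 'ghost' list standing in for the inactive cache:
    hits on recently evicted tracks estimate the benefit of enlarging."""
    def __init__(self, active_size, ghost_size):
        self.active = OrderedDict()
        self.ghost = OrderedDict()
        self.active_size, self.ghost_size = active_size, ghost_size
        self.hits = self.ghost_hits = 0

    def access(self, track):
        if track in self.active:
            self.active.move_to_end(track)   # LRU refresh
            self.hits += 1
            return
        if track in self.ghost:              # would have hit in a larger cache
            self.ghost_hits += 1
            del self.ghost[track]
        self.active[track] = True
        if len(self.active) > self.active_size:
            evicted, _ = self.active.popitem(last=False)
            self.ghost[evicted] = True       # remember the eviction
            if len(self.ghost) > self.ghost_size:
                self.ghost.popitem(last=False)

    def should_grow(self):
        """Crude policy: activate the inactive cache when ghost hits dominate."""
        return self.ghost_hits > self.hits
```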
20180011800 | POWER SAVING METHOD AND APPARATUS FOR FIRST IN FIRST OUT (FIFO) MEMORIES - In various embodiments, apparatuses and methods are disclosed to keep a memory clock gated when the data for a current memory address is the same as the data in the immediate previous memory address. For a write function, new data will only be written into the current memory address if it is different from the data in the immediate previous memory address. Similarly, for a read function, the data will only be read out of the current memory address if it is different from the data in the immediate previous memory address. Each row in the memory may have one associated status bit outside the memory. Data may only be written to or read from the current memory address when the status bit is set. Clock gating the memory ports may reduce the overall power consumption of the memory. | 2018-01-11 |
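The write-suppression scheme above — skip the memory write (keeping the clock gated) when incoming data matches the previous entry, with a per-row status bit recording whether the row was actually written — can be modeled behaviorally. This is an illustrative model, not the claimed circuit.

```python
class DedupFifo:
    """FIFO that suppresses the memory write when new data equals the
    previous entry; the per-row status bit records whether the row holds
    fresh data (1) or repeats the prior value (0)."""
    def __init__(self, depth):
        self.mem = [0] * depth
        self.status = [0] * depth       # 1 => row written (memory clocked)
        self.wr = 0
        self.last = None

    def push(self, data):
        if data != self.last:           # only clock the memory on a change
            self.mem[self.wr] = data
            self.status[self.wr] = 1
        else:
            self.status[self.wr] = 0    # repeat: memory write suppressed
        self.last = data
        self.wr += 1

    def pop_all(self):
        """Reconstruct the pushed stream: a clear status bit means
        'repeat the previous value', so no memory read is needed either."""
        out, prev = [], None
        for i in range(self.wr):
            prev = self.mem[i] if self.status[i] else prev
            out.append(prev)
        return out
```

For bursty streams with long runs of identical words, most rows never toggle the memory, which is where the power saving comes from.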
20180011801 | APPLICATION-DRIVEN STORAGE SYSTEMS FOR A COMPUTING SYSTEM - Systems and methods that allow secure application-driven arbitrary compute in storage devices in a cloud-based computing system are provided. A computing system including a compute controller configured to: (1) provide access to host compute resources, and (2) operate in at least one of a first mode or a second mode is provided. The computing system may further include a storage controller configured to provide access to storage systems including storage components, at least one compute component, and at least one cryptographic component. In the first mode, the host compute resources may be configured to execute at least a first operation on at least a first set of data stored in at least one of the storage components. In the second mode, the at least one compute component may be configured to execute at least a second operation on at least a second set of data. | 2018-01-11 |
20180011802 | SELECTIVE MEMORY ENCRYPTION - In one example in accordance with the present disclosure, a method may include receiving, by a processor on a system on a chip (SoC), a request to encrypt a subset of data accessed by a process. The method may also include receiving, at a page encryption hardware unit of the SoC, a system call from an operating system on behalf of the process, to generate an encrypted memory page corresponding to the subset of data. The method may also include generating, by the page encryption hardware unit, an encryption/decryption key for the first physical memory address. The encryption/decryption key may not be accessible by the operating system. The method may also include encrypting, by the page encryption hardware unit, the subset of data to the physical memory address using the encryption/decryption key and storing, by the page encryption hardware unit, the encryption/decryption key in a key store. | 2018-01-11 |
20180011803 | Secrecy System And Decryption Method Of On-Chip Data Stream Of Nonvolatile FPGA - A secrecy system and a decryption method of on-chip data stream of nonvolatile FPGA are provided in the present invention. The nonvolatile memory module of the system is configured to only allow the full erase operation. After the full erase operation is finished, the nonvolatile memory module gets into the initial state. Only the operation to the nonvolatile memory module under the initial state is effective, and thereby the encryption region unit is arranged in the nonvolatile memory module. Only the decryption data written into the encryption region unit under the initial state can make the nonvolatile memory module to be readable, so that the decryption of the system is finished, which greatly improves the secrecy precision. | 2018-01-11 |
20180011804 | Inter-Process Signaling Mechanism - The disclosed embodiments provide a mechanism to support implementation of semaphores or messaging signals between masters in a multi-master system, or between tasks in a single master system. A semaphore flag register contains one or more bits indicating whether resources are free or busy. The register is aliased to allow atomic read-and-clear of individual bits in the register. Masters poll the status of a resource until the resource reads as free. Alternatively, interrupts or events per master can be implemented to indicate availability of a resource. | 2018-01-11 |
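The aliased read-and-clear register above gives masters a one-access claim operation: reading a flag bit as 1 both reports the resource free and clears the bit atomically. A behavioral model of that register (bit layout and method names are invented for the example):

```python
class SemaphoreRegister:
    """Flag register with one bit per resource (1 = free). Reading through
    the aliased address returns a bit's value and clears it in the same
    access, so a master that reads 1 has claimed the resource atomically."""
    def __init__(self, nbits):
        self.flags = (1 << nbits) - 1     # all resources start free

    def read_and_clear(self, bit):
        mask = 1 << bit
        was_free = bool(self.flags & mask)
        self.flags &= ~mask               # clear happens with the read
        return was_free

    def release(self, bit):
        self.flags |= 1 << bit            # owner sets the bit back to free
```

A master polls `read_and_clear` until it returns free; because the read and the clear are one operation, two masters can never both observe the resource as free.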
20180011805 | Memory controller that uses a specific timing reference signal in connection with a data burst following a specified idle period - Apparatus and methods for operation of a memory controller, memory device and system are described. During operation, the memory controller transmits a read command which specifies that a memory device output data accessed from a memory core. This read command contains information which specifies whether the memory device is to commence outputting of a timing reference signal prior to commencing outputting of the data. The memory controller receives the timing reference signal if the information specified that the memory device output the timing reference signal. The memory controller subsequently samples the data output from the memory device based on information provided by the timing reference signal output from the memory device. | 2018-01-11 |
20180011806 | SUPPORTING DIFFERENT TYPES OF MEMORY DEVICES - A computing system for supporting a plurality of different types of memory devices includes a memory voltage regulator. The memory voltage regulator adjusts a supply voltage to a requisite voltage for a detected memory device based on serial presence detect (SPD) data. The computing system further includes a memory controller that supports a plurality of types of memory devices. The memory controller receives data regarding the type of the detected memory device, and controls input/output signals relative to the type of the detected memory device based on the SPD data and the GPIO data of the detected memory device. | 2018-01-11 |
20180011807 | LOW LATENCY EFFICIENT SHARING OF RESOURCES IN MULTI-SERVER ECOSYSTEMS - A method is provided in one example embodiment and includes receiving by a network element a request from a network device connected to the network element to update a shared resource maintained by the network element; subsequent to the receipt, identifying a Base Address Register Resource Table (“BRT”) element assigned to a Peripheral Component Interconnect (“PCI”) adapter of the network element associated with the network device, wherein the BRT points to the shared resource; changing an attribute of the identified BRT from read-only to read/write to enable the identified BRT to be written by the network device; and notifying the network device that the attribute of the identified BRT has been changed, thereby enabling the network device to update the shared resource via a Base Address Register (“BAR”) comprising the identified BRT. | 2018-01-11 |
20180011808 | OBTAINING OPTICAL SIGNAL HEALTH DATA IN A STORAGE AREA NETWORK - An aspect of obtaining optical signal health data in a SAN includes receiving, by a computer processor, a request for data corresponding to current operational characteristics of elements of a storage area network to which a host system computer has access. A further aspect includes instantiating, by the computer processor, a virtual host bus adapter interface on the host system computer, transmitting, via the virtual host bus adapter interface, the request to the elements in the portion of the storage area network, aggregating data received from each of the elements, and displaying the aggregated data via the computer processor. | 2018-01-11 |
20180011809 | MOBILE DEVICE AND METHOD FOR READING UART DATA - Disclosed is a mobile device which comprises a CPU, a USB Type-C interface and a switching circuit. The switching circuit is configured to switch a connection line of two preset pins of the USB Type-C interface to connect UART TxD and UART RxD pins of the CPU, upon detecting that a UART cable is inserted into the USB Type-C interface. | 2018-01-11 |
20180011810 | MULTI-CHANNEL PERIPHERAL INTERCONNECT SUPPORTING SIMULTANEOUS VIDEO AND BUS PROTOCOLS - A method includes generating, by a control unit of a first device, a handshaking signal to be transmitted to a second device via a second channel. The method further includes based on the handshaking signal being acknowledged by the second device, configuring, by the control unit, the second channel to communicate non-display data and configuring a first channel connecting the first device to the second device to selectively communicate either display data or non-display data; and based on the handshaking signal being not acknowledged by the second device, configuring, by the control unit, the first channel to communicate display data. | 2018-01-11 |
20180011811 | REDIRECTION OF LANE RESOURCES - An apparatus includes a pass-through module that includes connector pins to connect with at least one active portion of a motherboard connector and to separately connect with at least one inactive portion of the motherboard connector. A routing function on the pass-through module redirects a set of bidirectional lanes from the connector pins connected to the active portion of the motherboard connector to the connector pins connected to the inactive portion of the motherboard connector to enable a connection of the set of bidirectional lanes to at least one other motherboard resource connected to the inactive portion of the motherboard connector. | 2018-01-11 |
20180011812 | INFORMATION PROCESSING APPARATUS - An information processing device having a processor and memory, and including one or more accelerators and one or more storage devices, wherein: the information processing device has one network for connecting the processor, the accelerators, and the storage devices; the storage devices have an initialization interface for accepting an initialization instruction from the processor, and an I/O issuance interface for issuing an I/O command; and the processor notifies the accelerators of the address of the initialization interface or the address of the I/O issuance interface. | 2018-01-11 |
20180011813 | SERIAL MID-SPEED INTERFACE - In accordance with embodiments disclosed herein, systems and methods are provided for a serial mid-speed interface. A first component includes a phase-locked loop (PLL) to receive an input clock signal and to output an output signal, an interface controller including a clock-management state machine, and a transmitter. The interface controller is to receive the input clock signal, receive the output signal from the PLL, and generate a speed-switch packet. The transmitter is to transmit a first plurality of packets to a second component at a clock rate based on the input clock signal via a mid-speed interface, transmit the speed-switch packet to the second component, and transmit a second plurality of packets to the second component at a PLL rate based on the output signal, where the PLL rate is greater than the clock rate. | 2018-01-11 |
20180011814 | MOTHERBOARD MODULE HAVING SWITCHABLE PCI-E LANE - A motherboard module having switchable PCI-E lanes includes a CPU, a first PCI-E slot, a second PCI-E slot, a first switch, and a second switch. 1st to a-th processor pin sets of the CPU are switchably electrically connected to 1st to a-th first PCI-E pin sets of the first PCI-E slot or (2N−a+1)th to 2N-th second PCI-E pin sets of the second PCI-E slot via the first switch to form PCI-E lanes whose number is a. (a+1)-th to 2N-th processor pin sets of the CPU are connected to the second input terminal of the second switch, and the second output terminal of the second switch is switchably electrically connected to (a+1)-th to 2N-th first PCI-E pin sets of the first PCI-E slot or 1st to (2N−a)th second PCI-E pin sets of the second PCI-E slot to form PCI-E lanes whose number is 2N−a, wherein 1 | 2018-01-11 |
20180011815 | ACTIVE STABILITY DEVICES AND SYSTEMS FOR MOBILE DEVICES - A system for use with a mobile device includes at least one sensor to sense a variable related to tilting of the mobile device and at least one activatable system in operative connection with the sensor. The at least one activatable system increases stability of the mobile device upon actuation/change in state thereof on the basis of data measured by the at least one sensor. A variable related to tilting includes variables that indicate concurrent, actual tilting as described herein as well as variables predictive of imminent tilting. Activatable systems hereof change state upon actuation or activation to increase stability of the mobile device by reducing, eliminating or preventing tilting. | 2018-01-11 |
20180011816 | METHODS AND SYSTEMS FOR CALCULATING UNCERTAINTY - Disclosed are methods and systems for performing uncertainty calculations. For example, a numeric value and an error range associated with the numeric value are converted by a processor into a trans-imaginary input dual which is a hybrid of numeric and geometric information having real and complex numbers. A dual calculation is performed using the trans-imaginary input dual to produce a trans-imaginary output dual, and the processor then converts the trans-imaginary output dual to a real number output numeric value that includes both a real number and real number error range or uncertainty associated with that real number. | 2018-01-11 |
20180011817 | GENERATING A VISUAL LAYOUT TREE USING CACHING TECHNIQUES - A layout engine generates a visual layout tree for a visual description tree of a document whose content is to be rendered in one or more display areas. A visual description tree describes the content of a document along with the formatting and logical arrangement of the content. Visual description nodes of a visual description tree represent the organization of the constituent elements of a document hierarchically. A visual layout tree defines the layout of a document within one or more display areas. Visual layout nodes of a visual layout tree represent the layout of the content of a document hierarchically. The layout engine receives as input a visual description tree and generates the corresponding visual layout tree. A rendering engine then inputs the visual layout tree and generates a graphics buffer containing the pixels representing the layout of the document as defined by the visual layout tree. | 2018-01-11 |
20180011818 | Webpage Update Method And Apparatus - A webpage update method and apparatus include setting a dynamic area according to a visible area of a browser, then using an area, corresponding to the dynamic area, in a target webpage as an area to be rendered, and storing webpage update content generated in the area to be rendered in a first preset storage area, so as to read, at one time, the webpage update content in the first preset storage area and complete rendering of the area to be rendered, thereby ensuring that the content currently displayed in the visible area is updated content. Meanwhile, webpage update content generated in an area other than the area to be rendered is stored in a second preset storage area, and rendering of the corresponding area is temporarily not performed, so that the workload of the rendering operation can be reduced. | 2018-01-11 |
20180011819 | CLIENT-SIDE WEB SERVICE PROVIDER - To facilitate client access to web services, a server may be configured to package or bundle a function call for a web service with associated information. The resulting function call package may be distributed to multiple clients. Such a package may include parameters for the function call and/or software code that can facilitate a client web site to make the function call. The package may also include branding information associated with the web service. Results of the function call can be presented through the client web site with the branding information associated with the web service, allowing a third party to retain branding for their web service provided through the client web site. | 2018-01-11 |
20180011820 | EFFICIENT HANDLING OF BI-DIRECTIONAL DATA - A tool for standardized layout transformations of BIDI data exchanged between legacy and modern systems is provided. The tool retrieves client connection information from a client request for data. The tool determines, based, at least in part, on the client connection information, a client application's operating system. The tool determines whether the data requested in the client request is BIDI data. Responsive to a determination that the data requested is BIDI data, the tool initiates a layout transformation of the data requested at a single point within the database server. The tool returns transformed BIDI data to the client application. | 2018-01-11 |
20180011821 | PRESENTATION SYSTEM, RECEIVER DEVICE, METHODS THEREOF, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM THEREOF - A presentation method comprises: a sender device capturing screen frames and transmitting the screen frames to a receiver device; the receiver device receiving the screen frames transmitted by the sender device and outputting the screen frames to a display device for displaying the screen frames; establishing a webpage server in the receiver device; and the receiver device storing the screen frames as web pages. Thereby, audience members can use the browsers of mobile Internet devices to connect to the webpage server and browse the web pages containing the presentation contents. | 2018-01-11 |
20180011822 | APPARATUS AND METHOD FOR OUTPUTTING WEB CONTENT - Disclosed are an apparatus and a method for outputting web content. The apparatus and method can prevent rendering performance from being degraded in some operating systems when web content (web document) is output through a web browser. | 2018-01-11 |
20180011823 | MANAGEMENT OF BUILDING PLAN DOCUMENTS UTILIZING COMMENTS AND A CORRECTION LIST - Systems and methods for managing and processing building plan documents. An electronic building plan document including a plurality of plan sheets is received. A first plan sheet is provided for display. A user interface is provided via which a user can select or enter a first comment associated with the first plan sheet. A user interface is provided via which the user can associate metadata, including a project type or discipline, with the first comment. A first plurality of comments, including the first comment, is stored in association with respective metadata. A user interface is provided via which the user can select, by category and/or subcategory, a plurality of comments to be included in a plan correction list, and a correction list is generated including a plurality of comments specified by a plurality of users. | 2018-01-11 |
20180011824 | SYSTEM FOR COMPARISON AND MERGING OF VERSIONS IN EDITED WEBSITES AND INTERACTIVE APPLICATIONS - A device implementable in a website design program includes a component based version comparer to compare at least two versions of a website, the at least two versions having components in a set of hierarchies and to generate a difference tree representing the differences in the components between the at least two versions of the same website, where the version comparer preprocesses the components in each single version of the at least two versions of the website to determine at least one of internal geometric, semantic, content and attribute relationships before comparing the components and the relationships between the at least two versions of the website using at least one of semantic, geometrical, content and attribute analysis. The device also includes a version merger to create an integrated version of the two versions of the website based on the difference tree. | 2018-01-11 |
20180011825 | MAINTAINING STATE OF DOCUMENT CONCURRENTLY EDITED BY TWO COMPUTING SYSTEMS BY DETERMINING LOCATIONS OF EDITS BASED ON INVERSE TRANSFORMATIONS - A non-transitory computer-readable storage medium may comprise instructions stored thereon. The instructions, when executed by at least one processor, may be configured to cause the first computing system to at least generate a collection of inverse transformations of an index of the document, the collection of inverse transformations being based on at least one asymmetric local edit to the document by the first computing system, determine a location within the document to perform a remote edit by the second computing system based on the collection of inverse transformations of the index of the document, perform the remote edit to the document at the determined location based on the remote edit by the second computing system, and perform at least one local edit to the document based on the at least one asymmetric local edit to the document by the first computing system. | 2018-01-11 |
20180011826 | ELECTRONIC DEVICE AND CONTROL METHOD THEREOF - An electronic device is provided, which includes a display configured to receive a handwriting by touch and display the received handwriting, and a processor configured to display a handwriting input by at least two handwriting tools selected among different handwriting tools provided through the display by dividing layers of the handwriting according to a handwriting tool, and in response to a selection of a layer among the layers divided according to the handwriting tool, control to edit only a handwriting input by a handwriting tool corresponding to the selected layer. | 2018-01-11 |
20180011827 | ENTITY-BASED DOCUMENT CONTENT CHANGE MANAGEMENT SYSTEM - A content management system is disclosed. The system includes at least one server, non-transitory storage, documents, entity-specific section weights, and entity-specific review thresholds. The system further includes at least two client computer systems that enable a user to access a document for at least one of review or modification. The system will, in response to receipt of an indication that changes have been made to one or more sections of a document, A) determine a change value indicative of a quantity of changes made within each section, B) calculate an entity-specific provenance value by multiplying, on a section basis, the change value within each section by the assigned entity-specific weight value for each section, to produce an entity-specific section value for each section, and then summing the entity-specific section values, and C) when any entity-specific provenance value satisfies a review threshold value, construct and send a review notification. | 2018-01-11 |
20180011828 | METHOD AND SYSTEM FOR RECOMMENDING MULTIMEDIA SEGMENTS IN MULTIMEDIA CONTENT FOR ANNOTATION - The disclosed embodiments illustrate methods for recommending multimedia segments in multimedia content associated with online educational courses for annotation via a user interface. The method includes extracting one or more features associated with the multimedia content, wherein a feature of the one or more features corresponds to at least a requirement of an exemplary instance. The method further includes selecting a set of multimedia segments from one or more multimedia segments in the multimedia content, based on historical data that corresponds to interaction of one or more users with the multimedia content and the extracted one or more features associated with the multimedia content. Further, the method includes recommending the selected set of multimedia segments in the multimedia content through the user interface displayed on the user-computing device associated with a user, wherein the user annotates the recommended set of multimedia segments in the multimedia content. | 2018-01-11 |
20180011829 | DATA PROCESSING APPARATUS, SYSTEM, DATA PROCESSING METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM - A data processing apparatus includes a display controller, a first acquisition unit, a second acquisition unit, and a combining unit. The display controller displays, on a display, an operation screen for a process performed in a different device connected via a network. The first acquisition unit acquires an image of the operation screen. The second acquisition unit acquires plural pieces of information received on the operation screen. The combining unit combines an annotation image that represents each of the acquired pieces of information with the image that represents the operation screen. | 2018-01-11 |