52nd week of 2014 patent application highlights part 71 |
Patent application number | Title | Published |
20140380259 | LAYOUT MIGRATION WITH HIERARCHICAL SCALE AND BIAS METHOD - A method for migrating a hierarchical layout between manufacturing processes is accomplished without specification of a technology file and design rules. Different scaling factors and bias values in the X and Y directions may be applied to each layer in the source hierarchical layout during the migration. In addition, the target hierarchical layout maintains connectivity, and is free of notches, jogs and small edges. A cell hierarchy tree is created, which guides expansion of the target hierarchical database to resolve issues related to rounding of floating point numbers to integers. Boolean operations are performed to determine the differences between the target flat database and the target hierarchical database. The differences are eliminated by modifying the target hierarchical database to match the layout in the flat database. | 2014-12-25 |
20140380260 | Scalable Meta-Data Objects - A method is disclosed for defining an integrated circuit. The method includes generating a digital data file that includes both electrical connection information and physical topology information for a number of circuit components. The method also includes operating a computer to execute a layout generation program. The layout generation program reads the electrical connection and physical topology information for each of the number of circuit components from the digital data file and automatically creates one or more layout structures necessary to form each of the number of circuit components in a semiconductor device fabrication process, such that the one or more layout structures comply with the physical topology information read from the digital data file. The computer is also operated to store the one or more layout structures necessary to form each of the number of circuit components in a digital format on a computer readable medium. | 2014-12-25 |
20140380261 | SEMICONDUCTOR DEVICE RELIABILITY MODEL AND METHODOLOGIES FOR USE THEREOF - Systems and methods for semiconductor device reliability qualification during semiconductor device design. A method is provided that includes defining performance process window bins for a performance window. The method further includes determining at least one failure mechanism for each bin assignment. The method further includes generating different reliability models when the at least one failure mechanism is a function of the process window, and generating common reliability models when the at least one failure mechanism is not a function of the process window. The method further includes identifying at least one risk factor for each bin assignment, and generating aggregate models using a manufacturing line distribution. The method further includes determining a fail rate by bin and optimizing a line center to minimize product fail rate. The method further includes determining a fail rate by bin and scrapping production as a function of a manufacturing line excursion event. | 2014-12-25 |
20140380262 | METHOD OF DESIGNING POWER SUPPLY NETWORK - To design a power supply network of a | 2014-12-25 |
20140380263 | DYNAMICALLY EVOLVING COGNITIVE ARCHITECTURE SYSTEM BASED ON THIRD-PARTY DEVELOPERS - A dynamically evolving cognitive architecture system based on third-party developers is described. A system forms an intent based on a user input, and creates a plan based on the intent. The plan includes a first action object that transforms a first concept object associated with the intent into a second concept object and also includes a second action object that transforms the second concept object into a third concept object associated with a goal of the intent. The first action object and the second action object are selected from multiple action objects. The system executes the plan, and outputs a value associated with the third concept object. | 2014-12-25 |
20140380264 | Computer Platform for Development and Deployment of Sensor-Driven Vehicle Telemetry Applications and Services - A computing platform for intelligent development, deployment and management of vehicle telemetry applications is disclosed herein. Further, the present disclosure provides a method and system that enables provision of Intelligent Transportation Service on the Cloud-based Platform that facilitates creation and deployment of vehicle telemetry applications configured for enabling traffic measurements, traffic shaping, vehicle surveillance and other vehicle related services. | 2014-12-25 |
20140380265 | SOFTWARE CHANGE PROCESS ORCHESTRATION - Systems and methods to manage software change process orchestration are provided. In example embodiments, an indication to initiate a software change process is received. A process required to be performed for the software change process is identified in response to receiving the indication. Using a uniform software logistic protocol capable of accessing tools across different platforms and environments, a tool mapped to the process required to be performed for the software change process is triggered to be executed. | 2014-12-25 |
20140380266 | Parallel Programming of In Memory Database Utilizing Extensible Skeletons - An execution framework allows developers to write sequential computational logic, constrained for the runtime system to efficiently parallelize execution of custom business logic. The framework can be leveraged to overcome limitations in executing low level procedural code, by empowering the system runtime environment to parallelize this code. Embodiments employ algorithmic skeletons in the realm of optimizing/executing data flow graphs of database management systems. By providing an extensible set of algorithmic skeletons the developer of custom logic can select the skeleton appropriate for new custom logic, and then fill in the corresponding computation logic according to the structural template of the skeleton. The skeleton provides a set of constraints known to the execution environment, that can be leveraged by the optimizer and the execution environment to generate parallel optimized execution plans containing custom logic, without the developer having to explicitly describe parallelization of the logic. | 2014-12-25 |
20140380267 | LIFECYCLE MANAGEMENT SYSTEM WITH CONDITIONAL APPROVALS AND CORRESPONDING METHOD - Certain example embodiments concern a lifecycle management system for at least one computing component. A lifecycle model, including lifecycle states assignable to the at least one computing component, is defined. The lifecycle states include a production state. The lifecycle management system ensures the at least one computing component can be productively used only if it is assigned the production state. A lifecycle transition request assigning a requested target lifecycle state of the lifecycle model to the at least one computing component is received. A conditional lifecycle state, different from the requested target lifecycle state, is assigned to the at least one computing component. At least one condition to be fulfilled for the at least one computing component to be assigned the requested target lifecycle state is assigned. The requested target lifecycle state is automatically assigned to the at least one computing component when the at least one condition is fulfilled. | 2014-12-25 |
20140380268 | DYNAMICALLY EVOLVING COGNITIVE ARCHITECTURE SYSTEM PLANNING - Dynamically evolving cognitive architecture system planning is described. A system forms an intent based on a user input, and creates a plan based on the intent. The plan includes a first action object that transforms a first concept object associated with the intent into a second concept object and also includes a second action object that transforms the second concept object into a third concept object associated with a goal of the intent. The first action object and the second action object are selected from multiple action objects. The system executes the plan, and outputs a value associated with the third concept object. | 2014-12-25 |
20140380269 | VERIFICATION OF COMPUTER-EXECUTABLE CODE GENERATED FROM A MODEL - In an embodiment, a model is sliced into a plurality of slices. A slice in the plurality of slices is selected. A portion of code, that corresponds to the selected slice, is identified from code generated from the model. The identified code is verified to be equivalent to the selected slice. Equivalence may include equivalent functionality, equivalent data types, equivalent performance, and/or other forms of equivalence between the selected slice and the identified generated code. | 2014-12-25 |
20140380270 | CODE GENERATION - The present invention provides a method of generating computer executable code using components, each of which corresponds to a respective data manipulation service, typically implemented by a respective entity. The method includes defining a combination of components corresponding to a sequence of data manipulations. The data manipulations are then performed, which can be achieved by requesting the provision of each service from the respective entities in accordance with the defined component combination, thereby causing computer executable code to be generated. | 2014-12-25 |
20140380271 | SYSTEMS AND METHODS FOR INCREMENTAL SOFTWARE DEVELOPMENT - Methods and systems for facilitating incremental software development are disclosed. For example, a method can include receiving a plurality of binary software libraries sufficient for building a software project. A request from a user to modify source code for at least one of the plurality of binary libraries is received. In response to receiving the request, the source code for the at least one of the plurality of binary libraries is retrieved. The source code for the at least one of the plurality of binary libraries is presented to the user. Modified source code for the at least one of the plurality of binary libraries is received. The modified source code is compiled to produce compiled modified code. A revised version of the software project is built using the compiled modified code and the plurality of binary libraries. | 2014-12-25 |
20140380272 | BUSINESS APPLICATION INSPECTION AND MODIFICATION - An inspection and modification window can be displayed within a user interface of a business application being executed in a business application inspection and modification environment. Application code relating to a current navigation point within the business application can be listed within the inspection and modification window. Modifications to the application code can be received via one or more user inputs, and the business application can be executed from the current navigation point to test how the received modifications to the application code affect operation of the business application. | 2014-12-25 |
20140380273 | METHODS, APPARATUSES, AND COMPUTER PROGRAM PRODUCTS FOR FACILITATING A DATA INTERCHANGE PROTOCOL MODELING LANGUAGE - An apparatus for defining a data interchange protocol (DIP) modeling language may include a processor and memory storing executable computer code causing the apparatus to at least perform operations including defining a DIP modeling language specifying data models shared by communication devices. The data models include data specifying criteria to define DIP objects including instances of data. The computer program code may further cause the apparatus to specify features in the data models corresponding to properties of the objects. The features are utilized in part to determine whether properties or objects of a DIP document(s) are valid. The computer program code may further cause the apparatus to evaluate an object(s) of a DIP document(s) to determine whether the object is valid based on analyzing items of data in the data models specifying that objects assigned a type and name are valid. Corresponding methods and computer program products are also provided. | 2014-12-25 |
20140380274 | SOFTWARE LOGISTICS PROTOCOLS - Techniques for using a software logistics protocol include initiating, using the software logistics protocol, a software logistics process, the software logistics protocol being a common application programming interface (API) for controlling and managing the life cycle and operation of a plurality of different software logistics processes; monitoring, using the software logistics protocol, the progress of execution of the software logistics process; and gathering, using the software logistics protocol, output information from the software logistics process after the software logistics process finishes executing. | 2014-12-25 |
20140380275 | MECHANISM FOR COMPATIBILITY AND PRESERVING FRAMEWORK REFACTORING - The subject disclosure relates to enabling the evolution of a framework by providing public surface area factorings for both old and new public surface areas. The factoring can mitigate changes in the implementation of existing distributions of framework. The factoring can also mitigate breaking existing binaries. Further, the factoring can be provided while mitigating a degradation in the security guarantees of the linking model. The factorings can be applied for runtime and/or for a development toolkit. Thus, multiple, almost simultaneous, interoperable views of a framework implementation can be enabled at runtime and/or at design or build time. The views can represent different versions of the framework. | 2014-12-25 |
20140380276 | CLASS AND NAMESPACE DISCOVERY IN PROTOTYPE BASED LANGUAGE - Structure of a prototype-based programming language program is determined based on results of program execution. The structure determined can be implied by a program rather than explicitly declared. For example, classes and namespaces of a prototype-based program can be detected or inferred by identifying patterns that indicate the presence of a class or namespace. Furthermore, members of classes and namespaces can also be determined. | 2014-12-25 |
20140380277 | Risk-based Test Plan Construction - In one embodiment, a method determines a plurality of test cases to test an application and a set of attributes assigned to each test case in the plurality of test cases. The method then calculates a test case risk score for each test case in the plurality of test cases based on the set of attributes associated with each respective test case. The test case risk score quantifies a risk in not executing each respective test case. A subset of the plurality of test cases is selected based on at least a portion of the calculated risk scores. The subset of the plurality of test cases is output along with a test plan risk score that quantifies the risk in not executing the test cases not included in the subset. | 2014-12-25 |
20140380278 | AUTOMATIC FRAMEWORK FOR PARALLEL TESTING ON MULTIPLE TESTING ENVIRONMENTS - A web application is tested on multiple testing environments provided by testing appliances. The testing environments are described by a platform, managing an appliance, a browser used for loading the web application, and a browser version. An automatic testing framework is used for handling the parallelized test execution on all of the testing environments. Within the testing framework the testing environments are defined and prepared for the test execution. A consolidated configuration file is generated for the web application's configuration and the tests classes. The testing framework provides a local server to host the web application which is later loaded in the testing environments. The testing framework processes the test and uses a communication with the appliances to send commands and to execute the test on all of the testing environments. A unified test report is generated that accumulates the results from all of the testing environments. | 2014-12-25 |
20140380279 | PRIORITIZING TEST CASES USING MULTIPLE VARIABLES - A computer identifies lines of code of a product program that have been modified after an initial test of the product program. The computer determines the overlap between lines of code that have been modified and a mapped test case. The computer determines a weighted value for the mapped test case based on two or more of: an environment of the test case, the degree of the overlap, a time the test case was last executed, a time the test case takes to execute, and a priority of a defect. The environment of the test case is configured to replicate a working environment where the product program is to be deployed and includes an operating system, a hardware configuration, and the configuration of the operating system. | 2014-12-25 |
20140380280 | DEBUGGING TOOL WITH PREDICTIVE FAULT LOCATION - Identifying a code segment that has a likelihood of causing a program failure. Program code is executed to a failure point. A plurality of code segments executed in the program code prior to the failure point are identified. Changesets that contain at least one of the identified code segments are identified. The identified code segments are then ranked as a function of likelihood that each respectively ranked identified code segment caused the failure point, based, at least in part, on the identified changesets. In another aspect of the invention, at least some of the ranked code segments along with an indication of the ranking are reported. | 2014-12-25 |
20140380281 | AUTOMATED SOFTWARE TESTING - Disclosed in some examples are systems, machine readable mediums and methods which automate testing of web-based application code by automatically generating test harnesses based on a specified configuration and test script, hosting the test harness, causing the test harness to be run to test the code, and delivering the test results to the user. In some examples, the specified conditions may specify one or more test environments corresponding to an execution environment. This allows users greater flexibility in support of testing libraries and support of testing environments. The end users of the software under test will be provided software that is better tested for many different environments. | 2014-12-25 |
20140380282 | MONITORING MOBILE APPLICATION PERFORMANCE - Aspects of the subject disclosure are directed towards monitoring application performance during actual use, particularly mobile application performance. Described is instrumenting mobile application binaries to automatically identify a critical path in user transactions, including across asynchronous-call boundaries. Trace data is logged by the instrumented application to capture UI manipulations, thread execution, asynchronous calls and callbacks, UI updates and/or thread synchronization. The trace data is analyzed to assist developers in improving application performance. | 2014-12-25 |
20140380283 | Systems and Methods of Detecting Power Bugs - Embodiments of the present invention provide a system and methods for detecting power bugs. In one embodiment, a computer-implemented method for analyzing a computer code includes generating a control flow graph for at least a portion of the computer code at a processor. The method further includes identifying power bugs by traversing the control flow graph if the control flow graph exits without performing a function call to deactivate power to any component of a device configured to execute computer executable instructions based on the computer code after performing a function call to activate power. | 2014-12-25 |
20140380284 | METHOD FOR DEVELOPING AND TESTING A CONNECTIVITY DRIVER FOR AN INSTRUMENT - A computer readable memory medium comprising program instructions for developing and testing a connectivity driver for an instrument is provided. The program instructions are executable by a processor to record transmissions to or from the instrument, place raw data from each recorded transmission into a primary field, and generate a secondary field associated with the primary field. The secondary field includes at least one of: a time that the transmission was transmitted at, a direction the transmission was transmitted in, a content of the transmission, and a state of the connectivity driver during the transmission. The program instructions are also executable by a processor to modify the content of the primary or secondary fields, and play the modified transmission from the computer readable memory medium in order to debug the communications software. | 2014-12-25 |
20140380285 | DYNAMICALLY EVOLVING COGNITIVE ARCHITECTURE SYSTEM BASED ON A NATURAL LANGUAGE INTENT INTERPRETER - A dynamically evolving cognitive architecture system based on a natural language intent interpreter is described. A system forms an intent based on a user input, and creates a plan based on the intent. The plan includes a first action object that transforms a first concept object associated with the intent into a second concept object and also includes a second action object that transforms the second concept object into a third concept object associated with a goal of the intent. The first action object and the second action object are selected from multiple action objects. The system executes the plan, and outputs a value associated with the third concept object. | 2014-12-25 |
20140380286 | DYNAMICALLY EVOLVING COGNITIVE ARCHITECTURE SYSTEM BASED ON TRAINING BY THIRD-PARTY DEVELOPERS - A dynamically evolving cognitive architecture system based on training by third-party developers is described. A system forms an intent based on a user input, and creates a plan based on the intent. The plan includes a first action object that transforms a first concept object associated with the intent into a second concept object and also includes a second action object that transforms the second concept object into a third concept object associated with a goal of the intent. The first action object and the second action object are selected from multiple action objects. The system executes the plan, and outputs a value associated with the third concept object. | 2014-12-25 |
20140380287 | COMPILATION OF SYSTEM DESIGNS - A method is provided for compiling an HLL program. A command is input that indicates a set of HLL source files to be compiled and a set of functions in the HLL source files that are to be implemented on programmable circuitry of a programmable IC. For a source file including one of the set of functions, a respective netlist is generated from HLL code of each of the set of functions included therein. Interface code is also generated for communication with the netlist. HLL code of the set of functions in the HLL source file is replaced with the generated interface code. Each HLL source file is compiled to produce a respective object file. The object files are linked to generate a program executable on the programmable IC. A configuration data stream is generated that implements each generated netlist on the programmable IC. | 2014-12-25 |
20140380288 | UTILIZING SPECIAL PURPOSE ELEMENTS TO IMPLEMENT A FSM - Apparatus, systems, and methods for a compiler are described. One such compiler generates machine code corresponding to a set of elements including a general purpose element and a special purpose element. The compiler identifies a portion in an arrangement of relationally connected operators that corresponds to a special purpose element. The compiler also determines whether the portion meets a condition to be mapped to the special purpose element. The compiler also converts the arrangement into an automaton comprising a plurality of states, wherein the portion is converted using a special purpose state that corresponds to the special purpose element if the portion meets the condition. The compiler also converts the automaton into machine code. Additional apparatus, systems, and methods are disclosed. | 2014-12-25 |
20140380289 | PLATFORM SPECIFIC OPTIMIZATIONS IN STATIC COMPILERS - Embodiments include systems and methods for generating an application code binary that exploits new platform-specific capabilities, while maintaining backward compatibility with other older platforms. For example, application code is profiled to determine which code regions are main contributors to the runtime execution of the application. For each hot code region, a determination is made as to whether multiple versions of the hot code region should be produced for different target platform models. Each hot code region can be analyzed to determine if benefits can be achieved by exploiting platform-specific capabilities corresponding to each of N platform models, which can result in between one and N versions of that particular hot code region. Navigation instructions are generated as part of the application code binary to permit a target machine to select appropriate versions of the hot code sections at load time, according to the target machine's capabilities. | 2014-12-25 |
20140380290 | EXTRACTING STREAM GRAPH STRUCTURE IN A COMPUTER LANGUAGE BY PRE-EXECUTING A DETERMINISTIC SUBSET - Compile-time recognition of graph structure where graph has arbitrary connectivity and is constructed using recursive computations is provided. In one aspect, the graph structure recognized at compile time may be duplicated at runtime and can then operate on runtime values not known at compile time. | 2014-12-25 |
20140380291 | EXTRACTING STREAM GRAPH STRUCTURE IN A COMPUTER LANGUAGE BY PRE-EXECUTING A DETERMINISTIC SUBSET - Compile-time recognition of graph structure where graph has arbitrary connectivity and is constructed using recursive computations is provided. In one aspect, the graph structure recognized at compile time may be duplicated at runtime and can then operate on runtime values not known at compile time. | 2014-12-25 |
20140380292 | METHOD, DEVICE, AND STORAGE MEDIUM FOR UPGRADING OPERATING SYSTEM - The present disclosure discloses a method, a device, and a storage medium for upgrading an operating system. The method includes: determining a current operating system in use; synchronizing system files of the current operating system to a mirror operating system; obtaining an operating system upgrade package; upgrading the mirror operating system according to the operating system upgrade package; starting the mirror operating system after the mirror operating system is successfully upgraded; and using the mirror operating system as the current operating system. In the present disclosure, the operating system upgrade does not influence normal use of the current operating system, which prevents accidents or errors from occurring in the upgrading of the operating system that might result in a malfunction of the current operating system, and thus increases the safety and stability of the operating system. | 2014-12-25 |
20140380293 | METHOD AND INFORMATION PROCESSING APPARATUS FOR EXTRACTING SOFTWARE CORRECTION PATCH - A management server refers to a server information management DB that stores therein information on a plurality of virtual servers generated from a plurality of virtual images and information on software that operates on the virtual servers, and selects other virtual server in conjunction with a particular virtual server from the plurality of virtual servers based on information on the particular virtual server generated from a predetermined virtual image and information on software that operates on the particular virtual server. The management server extracts a patch to be applied to the particular virtual server based on patches applied to the other virtual server. | 2014-12-25 |
20140380294 | METHODS FOR UPGRADING FIRMWARE AND ELECTRONIC DEVICES USING THE SAME - An embodiment of a method for upgrading firmware, being executed by a processing unit, is introduced. Factory settings corresponding to a first firmware version are combined with user configuration values corresponding to a second firmware version to generate combined user configuration values. System initiation for an electronic device is performed using the first firmware version according to the combined user configuration values. | 2014-12-25 |
20140380295 | METHOD AND SYSTEM FOR UPDATING APPLICATION, AND COMPUTER STORAGE MEDIUM THEREOF - A method for updating a software application includes: receiving version information that is uploaded by terminals; obtaining the latest version information of the software application, and comparing the uploaded version information with the latest version information to obtain added updating information; and distributing the added updating information to the terminal, wherein the application on the terminal is updated according to the added updating information. A corresponding system for updating a software application and a computer storage medium are provided as well. | 2014-12-25 |
20140380296 | RE-PROGRAMMING VEHICLE MODULES - A system and method of re-programming one or more modules at a vehicle includes deciding to re-program an infotainment head unit (IHU) or one or more vehicle system modules on a vehicle; accessing a Wi-Fi signal using the IHU; receiving software at the IHU from a remotely-located computer via the Wi-Fi signal; and re-programming the one or more vehicle system modules or the IHU with the received software using the IHU. | 2014-12-25 |
20140380297 | HYPERVISOR SUBPARTITION AS CONCURRENT UPGRADE - A processor-implemented method for a concurrent software service upgrade is provided. The processor implemented method may include receiving a type of service request corresponding to the software service upgrade, determining, by the processor, the type of service request and then generating a plurality of subpartitions corresponding to a hypervisor. The method may further include applying the service request to at least one subpartition within the plurality of subpartitions, wherein the service request is applied to the at least one subpartition based on the type of service request and balancing the system resources among the plurality of subpartitions upon the applying of the service request to the at least one subpartition. | 2014-12-25 |
20140380298 | WIRELESS COMMUNICATION TERMINAL, SOFTWARE UPDATE SYSTEM, AND SOFTWARE UPDATE METHOD - A software update system includes an administration server, a wireless communication terminal, and a wireless-communication key station. The wireless communication terminal is configured to be connected to the administration server through a communication network. The wireless-communication key station is configured to be positioned between the administration server and the wireless communication terminal, and to perform processing of distributing software of an update object transmitted from the administration server to the wireless communication terminal. | 2014-12-25 |
20140380299 | COMMUNICATION SYSTEM, COMMUNICATION METHOD, AND COMMUNICATION APPARATUS - An aspect of an embodiment of the invention provides a communication apparatus for receiving update data for an application, which carries out data communication of transferring data containing at least any one of video data and audio data over a network, and updating the application using the update data. The communication apparatus includes: a receiving unit configured to receive related information that is information related to the update data; an update unit configured to start downloading the update data based on the related information; and a determining unit configured to make determination about an execution state of the application. The update unit controls resumption and suspension of the downloading of the update data based on a result of the determination made by the determining unit. | 2014-12-25 |
20140380300 | DYNAMIC CONFIGURATION FRAMEWORK - Methods, systems, and computer-readable media for deploying a software module in a dynamic configuration framework are presented. A system may be running a software service, such as a software service that abstracts or transforms requests such that the requests may be serviced by a web service. The system may receive a request to deploy a new software module. In response to the request, the system may retrieve a binary file from a database. The binary file may comprise, for example, a Java Archive (.jar) file. A real-time class loader may then be accessed, where the real-time class loader may be configured to deploy the retrieved binary file. The software module may then be deployed by the real-time class loader using the retrieved binary file. The deployment may be achieved without interrupting the software service being run on the system. | 2014-12-25 |
20140380301 | LAUNCHING A TARGET APPLICATION BASED ON CHARACTERISTICS OF A STRING OF CHARACTERS IN A SOURCE APPLICATION - A method and system for launching a target application. A predefined data type is identified by: determining that a first row of a parser table including a first regular expression formulating a string of characters includes rows ordered from a more general to a more specific regular expression; setting a first regular expression as a regular expression matching the string; resetting the regular expression with a second regular expression, of a next row of the parser table, that formulates the string more specifically than the first regular expression; and selecting the predefined data type in the row of the regular expression as the predefined data type of the string, upon performing the resetting for all rows of the parser table. The target application previously associated with a combination of the identified data type and a source application containing the string is identified and launched with the string as a parameter. | 2014-12-25 |
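The parser-table walk described above can be sketched in Python. The table contents here (the patterns and data-type names) are hypothetical; the abstract only specifies that rows run from a more general to a more specific regular expression and that each later match resets the selected data type:

```python
import re

# Hypothetical parser table: rows ordered from more general to more
# specific regular expressions, each mapped to a predefined data type.
PARSER_TABLE = [
    (r"\S+", "text"),                          # most general
    (r"\d[\d\-\s]+\d", "phone_number"),
    (r"[\w.]+@[\w.]+\.\w+", "email_address"),  # most specific
]

def identify_data_type(string):
    """Walk every row; each later (more specific) match resets the result."""
    data_type = None
    for pattern, candidate in PARSER_TABLE:
        if re.fullmatch(pattern, string):
            data_type = candidate  # reset with the more specific match
    return data_type
```

The launched target application would then be looked up from the pair (returned data type, source application) and invoked with the string as a parameter.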
20140380302 | COMPUTER PROGRAM INSTALLATION ACROSS MULTIPLE MEMORIES - Embodiments herein are directed to a method for installing a program across multiple memories. The method includes calculating a memory space requirement of the program. It may be determined that a first available memory space in a first memory of the first computer system is smaller than the memory space requirement. The first memory is a default memory for installing the program. Upon determining that the first available memory space in the first memory is smaller than the memory space requirement, the method may perform the step of identifying a second memory in communication with the first computer system that has a second available memory space. The first and second available memory spaces, when combined, are sufficient for the memory space requirement to install files of the program. The files of the program may be installed in the first and second memories. | 2014-12-25 |
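The space check and split-installation step can be sketched as follows. This assumes a simple greedy first-fit placement of whole files; the abstract does not specify how files are divided between the memories:

```python
def plan_install(program_files, memories):
    """
    program_files: dict of filename -> size in bytes.
    memories: ordered list of (name, available_bytes); the first entry
    is the default memory for installing the program.
    Returns a mapping of memory name -> list of filenames, or None if
    the program cannot be installed.
    """
    required = sum(program_files.values())
    default_name, default_free = memories[0]
    if default_free >= required:
        return {default_name: list(program_files)}   # fits in the default memory
    if sum(free for _, free in memories) < required:
        return None                                  # combined space insufficient
    plan = {name: [] for name, _ in memories}
    remaining = dict(memories)
    for fname, size in program_files.items():
        for name, _ in memories:                     # first memory with room
            if remaining[name] >= size:
                plan[name].append(fname)
                remaining[name] -= size
                break
        else:
            return None   # no single memory can hold this file
    return plan
```

Because whole files are placed greedily, this sketch can fail even when the combined free space is sufficient; a real installer could split large files across memories.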
20140380303 | STORAGE MANAGEMENT FOR A CLUSTER OF INTEGRATED COMPUTING SYSTEMS - Integrated computing systems with independently managed infrastructures including compute nodes and storage nodes form a cluster. Storage resource agents manage storage resources in the cluster. The resource agents identify storage requirements associated with allocation sets for resource consumers dispatched in the cluster, communicate with each other to locate inter-system storage resources that primarily satisfy locality criteria associated with resource consumer workloads, secondarily satisfy allocation set activity criteria associated with the allocation sets, and allocate the storage resources to the resource consumers to satisfy the storage requirements. The storage resource agents may base storage assignments on data placement information from a priority map. Data may be later relocated to alternate storage resources in satisfaction of cluster-wide storage policies, priority determinations, and data access rate determinations. | 2014-12-25 |
20140380304 | METHODS AND SYSTEMS FOR ENERGY MANAGEMENT IN A VIRTUALIZED DATA CENTER - A method and system provision a plurality of resources of a data center. A violation risk factor for a set of low priority requests can be computed. A utilization factor of a set of activated resources of the data center is evaluated. According to a predefined rule base, one or more of the plurality of resources are provisioned for a received high priority request, whereby the predefined rule base defines performing one or more of: a) preempting a set of virtual machines utilizing a subset of the set of activated resources, whereby the set of virtual machines is associated with the set of low priority requests; b) activating a new set of resources; and c) consolidating a plurality of virtual machines, based on the computed violation risk factor and the evaluated utilization factor. | 2014-12-25 |
20140380305 | DEFERRING THE COST OF VIRTUAL STORAGE - In one embodiment, a virtual storage system | 2014-12-25 |
20140380306 | SYSTEM AND METHOD FOR LIVE CONVERSION AND MOVEMENT OF VIRTUAL MACHINE IMAGE AND STATE INFORMATION BETWEEN HYPERVISORS - A system for live conversion and movement of a virtual machine image and state information between hypervisors includes: means for freezing a current state of a source image; means for creating a proxy; means for redirecting any changes made to the source image to a journal of the proxy; means for reading from the source image; means for writing to the journal; means for converting the source image to a target image; means for reattaching the journal to the target image; and means for replaying the journal on the target image. | 2014-12-25 |
20140380307 | PERFORMANCE-DRIVEN RESOURCE MANAGEMENT IN A DISTRIBUTED COMPUTER SYSTEM - A system and method for managing resources in a distributed computer system that includes at least one resource pool for a set of virtual machines (VMs) utilizes a set of desired individual VM-level resource settings that corresponds to target resource allocations for observed performance of an application running in the distributed computer system. The set of desired individual VM-level resource settings are determined by constructing a model for the observed application performance as a function of current VM-level resource allocations and then inverting the function to compute the target resource allocations in order to meet at least one user-defined service level objective (SLO). The set of desired individual VM-level resource settings are used to determine final RP-level resource settings for a resource pool to which the application belongs and final VM-level resource settings for the VMs running under the resource pool, which are then selectively applied. | 2014-12-25 |
20140380308 | METHODS AND APPARATUS TO GENERATE A CUSTOMIZED APPLICATION BLUEPRINT - Methods and apparatus to generate a customized application blueprint are disclosed. An example method includes determining a first virtual machine within an application definition, automatically identifying a property for the first virtual machine, and generating an application blueprint based on the identified property of the virtual machine. | 2014-12-25 |
20140380309 | VIRTUAL MACHINE SYSTEM AND METHOD OF MEASURING PROCESSOR PERFORMANCE - In a virtual machine system where a first stage VM and a second stage VM generated on the first stage VM are executed, a processor is configured to perform a first determination as to whether to physically instruct to start execution caused by a virtual execution start of the second stage VM and a second determination as to whether a physical end is detected as a result of a virtual end of the second stage VM, and calculate an execution time of the second stage VM based on results of the first determination and the second determination. | 2014-12-25 |
20140380310 | SHARING USB KEY BY MULTIPLE VIRTUAL MACHINES LOCATED AT DIFFERENT HOSTS - A system for sharing a USB Key by multiple virtual machines located at different hosts including at least two virtual machine managers, each virtual machine manager including a virtual machine transceiver module which is configured to receive a request for accessing a USB Key from a virtual machine within its host; a storage module which is configured to store an association relationship between a USB Key and the virtual machine authenticated by the USB Key; a verification module which is configured to, in response to judging that the virtual machine of the received request can access the USB Key, transmit the request for accessing the USB Key to a USB Key transceiver module of a virtual machine manager of the host where the USB Key is located; and a USB Key transceiver module which is configured to receive a request for accessing a USB Key, and to transmit an access request to a connected USB Key. | 2014-12-25 |
20140380311 | VIRTUAL MACHINE DEVICE HAVING KEY DRIVEN OBFUSCATION AND METHOD - A virtual machine device | 2014-12-25 |
20140380312 | SYSTEM AND METHOD FOR ON-DEMAND CLONING OF VIRTUAL MACHINES - A system for on-demand cloning of virtual machines (VMs) includes a virtual server to host a number of VMs, the virtual server including at least one master VM. The system also includes a Web server to authenticate a user in response to a request for online access to a new VM on the virtual server. In addition, the system includes a cloning module, in communication with the Web server and the virtual server, to automatically clone the master VM to create a unique VM clone for the user on the virtual server responsive to the request. | 2014-12-25 |
20140380313 | METHOD AND DEVICE FOR LOADING ANDROID VIRTUAL MACHINE APPLICATION - A method and a device for loading a virtual machine application are provided herein. An exemplary method comprises: loading a management object of the virtual machine by a layer-booting object; reading a virtual machine configuration by the management object of the virtual machine; and invoking a creation function of the management object according to the virtual machine configuration to create an operational instance of the virtual machine. The Android method and device for loading a virtual machine can be used to improve switching speed between instances. | 2014-12-25 |
20140380314 | MANAGEMENT SERVER, PATCH SCHEDULING METHOD, AND RECORDING MEDIUM - A non-transitory computer-readable recording medium has stored therein a patch scheduling program that causes a computer to execute a process. The process includes managing, aggregating, determining, and scheduling. The managing includes managing a system including a plurality of virtualization software programs that control a plurality of virtual machines. The aggregating includes aggregating virtual machines that exhibit a similar trend in a predetermined index onto mutually-same virtualization software. The determining includes determining a time period during which the virtual machines are to be moved, based on the trends of the virtual machines aggregated onto the mutually-same virtualization software and based on the time required to move the virtual machines to different virtualization software among the plurality of virtualization software programs. The scheduling includes scheduling the application of a patch to each of the plurality of virtualization software programs at the determined time periods. | 2014-12-25 |
20140380315 | Transferring Files Using A Virtualized Application - Approaches for transferring a file using a virtualized application. A virtualized application executes within a virtual machine residing on a physical machine. When the virtualized application is instructed to download a file stored external to the physical machine, the virtualized application displays an interface which enables at least a portion of a file system, maintained by a host OS, to be browsed while preventing files stored within the virtual machine from being browsed. Upon the virtualized application receiving input identifying a target location within the file system, the virtualized application stores the file at the target location. The virtualized application may also upload a file stored on the physical machine using an interface which enables at least a portion of a file system of a host OS to be browsed while preventing files in the virtual machine from being browsed. | 2014-12-25 |
20140380316 | TECHNIQUES FOR DYNAMIC DISK PERSONALIZATION - Techniques for dynamic disk personalization are provided. A virtual image that is used to create an instance of a virtual machine (VM) is altered so that disk access operations are intercepted within the VM and redirected to a service that is external to the VM. The external service manages a personalized storage for a principal, the personalized storage used to personalize the virtual image without altering the virtual image. | 2014-12-25 |
20140380317 | SINGLE-PASS PARALLEL PREFIX SCAN WITH DYNAMIC LOOK BACK - One embodiment of the present invention performs a parallel prefix scan in a single pass that incorporates variable look-back. A parallel processing unit (PPU) subdivides a list of inputs into sequentially-ordered segments and assigns each segment to a streaming multiprocessor (SM) included in the PPU. Notably, the SMs may operate in parallel. Each SM executes write operations on a segment descriptor that includes the status, aggregate, and inclusive-prefix associated with the assigned segment. Further, each SM may execute read operations on segment descriptors associated with other segments. In operation, each SM may perform reduction operations to determine a segment-wide aggregate, may perform look-back operations across multiple preceding segments to determine an exclusive-prefix, and may perform a scan seeded with the exclusive prefix to generate output data. Advantageously, the PPU performs one read operation per input, thereby reducing the time required to execute the prefix scan relative to prior-art parallel implementations. | 2014-12-25 |
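The descriptor protocol above (status, aggregate, inclusive prefix, with variable look-back across preceding segments) can be illustrated with a sequential Python simulation. In the real design each segment runs on its own SM in parallel; here, processing segments in reverse order in phase 2 is only a device to force multi-segment look-back over aggregates:

```python
# Descriptor statuses
INVALID, AGGREGATE, PREFIX = 0, 1, 2

def single_pass_scan(inputs, segment_size):
    """Simulated decoupled look-back: each segment publishes its aggregate,
    then resolves an exclusive prefix by scanning backward over earlier
    descriptors, consuming aggregates until it reaches a descriptor whose
    inclusive prefix is available."""
    segments = [inputs[i:i + segment_size]
                for i in range(0, len(inputs), segment_size)]
    # (status, aggregate, inclusive_prefix) per segment
    desc = [[INVALID, 0, 0] for _ in segments]

    # Phase 1: every segment publishes its aggregate; segment 0 can
    # publish its inclusive prefix immediately.
    for i, seg in enumerate(segments):
        desc[i] = [AGGREGATE, sum(seg), 0]
    desc[0] = [PREFIX, sum(segments[0]), sum(segments[0])]

    # Phase 2 (reverse order only to exercise multi-hop look-back):
    out = [0] * len(inputs)
    for i in reversed(range(len(segments))):
        exclusive, j = 0, i - 1
        while j >= 0:
            status, agg, inc = desc[j]
            if status == PREFIX:
                exclusive += inc       # predecessor's prefix ends the look-back
                break
            exclusive += agg           # consume an aggregate, keep looking back
            j -= 1
        running = exclusive            # scan seeded with the exclusive prefix
        for k, x in enumerate(segments[i]):
            running += x
            out[i * segment_size + k] = running   # inclusive scan output
        desc[i] = [PREFIX, desc[i][1], exclusive + desc[i][1]]
    return out
```

Each input element is read exactly once during the per-segment scans; the look-back touches only the small descriptor records, which is the source of the single-pass advantage claimed in the abstract.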
20140380318 | VIRTUALIZED COMPONENTS IN COMPUTING SYSTEMS - The subject disclosure is directed towards virtual components, e.g., comprising software components such as virtual components of a distributed computing system. Virtual components are available for use by distributed computing system applications, yet managed by the distributed computing system runtime transparent to the application with respect to automatic activation and deactivation on runtime-selected distributed computing system servers. Virtualization of virtual components is based upon mapping virtual components to their physical instantiations that are currently running, such as maintained in a global data store. | 2014-12-25 |
20140380319 | ADDRESS TRANSLATION/SPECIFICATION FIELD FOR HARDWARE ACCELERATOR - Embodiments relate to an address translation/specification (ATS) field. An aspect includes receiving a work queue entry from a work queue in a main memory by a hardware accelerator, the work queue entry corresponding to an operation of the hardware accelerator that is requested by user-space software, the work queue entry comprising a first ATS field that describes a structure of the work queue entry. Another aspect includes, based on determining that the first ATS field is consistent with the operation corresponding to the work queue entry and the structure of the work queue entry, executing the operation corresponding to the work queue entry by the hardware accelerator. Another aspect includes, based on determining that the first ATS field is not consistent with the operation corresponding to the work queue entry and the structure of the work queue entry, rejecting the work queue entry by the hardware accelerator. | 2014-12-25 |
20140380320 | JOINT OPTIMIZATION OF MULTIPLE PHASES IN LARGE DATA PROCESSING - Methods and arrangements for task scheduling. A plurality of jobs is received, each job comprising at least a map phase, a copy/shuffle phase and a reduce phase. For each job, there are determined a map phase execution time and a copy/shuffle phase execution time. Each job is classified into at least one group based on at least one of: the determined map phase execution time and the determined copy/shuffle phase execution time. The plurality of jobs are executed via processor sharing, and the executing includes determining a similarity measure between jobs based on current job execution progress. Other variants and embodiments are broadly contemplated herein. | 2014-12-25 |
20140380321 | ENERGY EFFICIENT JOB SCHEDULING - The subject disclosure is directed towards scheduling jobs with a speed for running a processor(s) having variable speeds to save energy yet complete in time, in which the volume of the job is not known in advance, that is, in a non-clairvoyant setting. A non-clairvoyant algorithm uses an existing clairvoyant algorithm to determine the speed based upon information known from running one or more jobs, in full or in part. Also described is rounding jobs based upon their densities into rounding queues so that a hybrid of highest density first rules and FIFO rules may be used to obtain information used by the clairvoyant algorithm. | 2014-12-25 |
20140380322 | Task Scheduling for Highly Concurrent Analytical and Transaction Workloads - Systems and methods for a task scheduler with dynamic adjustment of concurrency levels and task granularity are disclosed for improved execution of highly concurrent analytical and transactional systems. The task scheduler can avoid both overcommitment and underutilization of computing resources by monitoring and controlling the number of active worker threads. The number of active worker threads can be adapted to avoid underutilization of computing resources by giving the OS control of additional worker threads processing blocked application tasks. The task scheduler can dynamically determine a number of parallel operations for a particular task based on the number of available threads. The number of available worker threads can be determined based on the average availability of worker threads in the recent history of the application. Based on the number of available worker threads, the partitionable operation can be partitioned into a number of sub operations and executed in parallel. | 2014-12-25 |
20140380323 | CONSISTENT MODELING AND EXECUTION OF TIME CONSTRUCTS IN BUSINESS PROCESSES - Embodiments are directed to executing a workflow using a virtualized clock and to ensuring idempotency and correctness among workflow processes. In one scenario, a computer system determines that a workflow session has been initialized. The workflow session runs as a set of episodes, where each episode includes one or more pulses of work that are performed when triggered by an event. Each workflow session is processed according to a virtualized clock that keeps a virtual session time for the workflow session. The computer system receives an event that includes an indication of the time the event was generated, and then accesses the received event to determine which pulses of work are to be performed as part of a workflow session episode. The computer system then executes the determined pulses of work according to the virtual session time indicated by the virtualized clock. | 2014-12-25 |
20140380324 | BURST-MODE ADMISSION CONTROL USING TOKEN BUCKETS - Methods and apparatus for burst-mode admission control using token buckets are disclosed. A work request (such as a read or a write) directed to a work target is received. Based on a first criterion, a determination is made that the work target is in a burst mode of operation. A token population of a burst-mode token bucket is determined, and if the population meets a second criterion, the work request is accepted for execution. | 2014-12-25 |
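The token bucket used for admission control here can be sketched as follows. The capacity and refill-rate parameters are illustrative, and the abstract's two burst-mode criteria are reduced to a simple population check; a `now` callable is injected so the refill logic is testable:

```python
import time

class TokenBucket:
    """Minimal token bucket for admission control (illustrative only)."""
    def __init__(self, capacity, refill_rate, now=time.monotonic):
        self.capacity = capacity          # maximum token population
        self.refill_rate = refill_rate    # tokens added per second
        self.tokens = capacity
        self.now = now
        self.last = now()

    def _refill(self):
        t = self.now()
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.refill_rate)
        self.last = t

    def try_admit(self, cost=1):
        """Accept the work request only if enough tokens remain."""
        self._refill()
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

In the patented scheme a work target in burst mode would consult a separate burst-mode bucket of this kind, so bursts can be admitted from tokens banked during idle periods.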
20140380325 | MULTIPROCESSOR SYSTEM - A multiprocessor system includes: a logical processor assigned to any one of physical processors to be executed on the multiprocessor system; and a scheduler managing the assignment of the logical processor to one of the first kind physical processor and the second kind physical processor. The logical processor has a flag for holding information indicating an internal state of the logical processor. The scheduler determines the assignment of the logical processor to one of the first kind physical processor and the second kind physical processor, based on presence or absence of an occurrence of a predetermined event and the information held in the flag. | 2014-12-25 |
20140380326 | COMPUTER PRODUCT, MULTICORE PROCESSOR SYSTEM, AND SCHEDULING METHOD - A non-transitory, computer-readable recording medium stores a scheduling program that causes a first core among multiple cores to execute a process that includes selecting a core from the cores; referring to a storage unit to assign first software assigned to the selected core, to a second core different from the selected core and among the cores, the storage unit being configured to store for each core among the cores, identification information of software assigned to the core; and assigning second software to the selected core as a result of assigning the first software to the second core, the second software being assigned when an activation request for the second software is accepted. | 2014-12-25 |
20140380327 | DEVICE AND METHOD FOR SYNCHRONIZING TASKS EXECUTED IN PARALLEL ON A PLATFORM COMPRISING SEVERAL CALCULATION UNITS - A device and method for synchronizing tasks executed in parallel on a platform comprising several computation units. The tasks are apt to be preempted by the operating system of the platform, and the device comprises at least one register and one recording module installed in the form of circuits on said platform, said recording module being suitable for storing a relationship between a condition to be satisfied regarding the value recorded by one of said registers and one or more computation tasks, the device comprising a dynamic allocation module installed in the form of circuits on the platform and configured to choose a computation unit from among computation units of the platform when said condition is fulfilled, and for launching the execution on the chosen computation unit of a software function for searching for the tasks on standby awaiting the fulfillment of the condition and notifications of said tasks. | 2014-12-25 |
20140380328 | SOFTWARE MANAGEMENT SYSTEM AND COMPUTER SYSTEM - A computer system includes: a physical computer including plural physical processors, a peripheral device connected to the plural physical processors, and a memory connected to the plural physical processors; and a management computer connected to the physical computer. The physical computer includes plural physical processor environments on each of which a virtual computer can be built, and the management computer includes an environment table indicating correspondence between plural physical processor environments each of which has the physical processor and on each of which a virtual computer can be built and an executable software program in each of the physical processor environments. When a specific software program is executed in the physical computer, a physical processor environment corresponding to a software program to be executed is selected from the plural physical processor environments by the environment table, and a virtual computer is built on the selected physical processor environment. | 2014-12-25 |
20140380329 | CONTROLLING SPRINTING FOR THERMAL CAPACITY BOOSTED SYSTEMS - A method and apparatus are described for performing sprinting in a processor. An analyzer in the processor may monitor thermal capacity remaining in the processor while not sprinting. When the remaining thermal capacity is sufficient to support sprinting, the analyzer may perform sprinting of a new workload when a benefit derived by sprinting the new workload exceeds a threshold and does not cause the remaining thermal capacity in the processor to be exhausted. The analyzer may perform sprinting of the new workload in accordance with sprinting parameters determined for the new workload. The analyzer may continue to monitor the remaining thermal capacity while not sprinting when the benefit derived by sprinting the new workload does not exceed the threshold. | 2014-12-25 |
20140380330 | TOKEN SHARING MECHANISMS FOR BURST-MODE OPERATIONS - Methods and apparatus for token-sharing mechanisms for burst-mode operations are disclosed. A first and a second token bucket are respectively configured for admission control at a first and a second work target. A number of tokens to be transferred between the first bucket and the second bucket, as well as the direction of the transfer, are determined, for example based on messages exchanged between the work targets. The token transfer is initiated, and admission control decisions at the work targets are made based on the token population resulting from the transfer. | 2014-12-25 |
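The transfer step can be sketched with two plain bucket records; the message exchange between work targets that decides the amount and direction of the transfer is abstracted away here, and the bucket representation (a dict of token population and capacity) is purely illustrative:

```python
def transfer_tokens(source, destination, amount):
    """Move up to `amount` tokens from one bucket's population to another's,
    capped by what the source holds and what the destination can accept.
    Buckets here are plain dicts: {'tokens': n, 'capacity': c}."""
    moved = min(amount, source['tokens'],
                destination['capacity'] - destination['tokens'])
    source['tokens'] -= moved
    destination['tokens'] += moved
    return moved

def try_admit(bucket, cost=1):
    """Admission decision based on the (post-transfer) token population."""
    if bucket['tokens'] >= cost:
        bucket['tokens'] -= cost
        return True
    return False
```

A burst at one work target can thus be absorbed by borrowing unused capacity from a lightly loaded peer, which is the sharing behavior the abstract describes.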
20140380331 | SYSTEM AND METHOD FOR RECEIVING ANALYSIS REQUESTS AND CONFIGURING ANALYTICS SYSTEMS - A method for analyzing data is disclosed that includes receiving an analysis request to analyze selected data corresponding to one or more monitored assets, wherein the analysis request includes one or more parameters corresponding to performance categories of computing resources for processing the analysis request, the performance categories include at least one of a time for processing the analysis request or a cost for processing the analysis request; determining a computing resource allocation plan for processing the analysis request based on the one or more parameters; and processing the analysis request using the determined computing resource allocation plan to provide analysis results. Also disclosed is an analytic router that includes a mapper, an estimator, an optimizer, and a resource provisioner. | 2014-12-25 |
20140380332 | Managing Service Level Objectives for Storage Workloads - Described herein is a system and method for dynamically managing service-level objectives (SLOs) for workloads of a cluster storage system. Proposed states/solutions of the cluster may be produced and evaluated to select one that achieves the SLOs for each workload. A planner engine may produce a state tree comprising nodes, each node representing a proposed state/solution. New nodes may be added to the state tree based on new solution types that are permitted, or nodes may be removed based on a received time constraint for executing a proposed solution or a client certification of a solution. The planner engine may call an evaluation engine to evaluate proposed states, the evaluation engine using an evaluation function that considers SLO, cost, and optimization goal characteristics to produce a single evaluation value for each proposed state. The planner engine may call a modeler engine that is trained using machine learning techniques. | 2014-12-25 |
20140380333 | DETECTION APPARATUS, NOTIFICATION METHOD, AND COMPUTER PRODUCT - A coprocessor stores to local memory, a driver execution start time, for each execution start of drivers. If a CPU call process is executed during the execution of driver A, the coprocessor calculates the difference between the execution start time and the current time for drivers B and C. Taking driver C as an example, the coprocessor adds, to the difference calculated for driver C, a processing time required for the CPU call process of driver A and a processing time required for a normal process of driver B. The coprocessor determines whether respective addition results for driver C comply with respective time constraints. If it is determined that an addition result for driver C cannot comply with the time constraint, the coprocessor sends an execution request for driver C to another coprocessor. | 2014-12-25 |
20140380334 | HARDWARE MANAGEMENT COMMUNICATION PROTOCOL - A simplified hardware management communication protocol comprises defined request packets, which are utilized to transmit requests to lower layers of management functionality or to managed resources, and it also comprises defined response packets, which are utilized to transmit responses back to the source of the request. A request packet comprises an identification of a type of device, an identifier of that device, an address of the sending entity, a session identifier, a sequence number, a function identifier, and a payload that comprises encapsulated communications or data directed to the request target. A response packet can comprise an identification of the sender of the request, a session identifier, a sequence number, a completion code identifying whether and how the request was completed, and a payload. Managed asset type specific drivers translate into communications utilizing communicational protocols that are specific to the managed assets. | 2014-12-25 |
20140380335 | METHOD AND SYSTEM FOR SHARING A HOTKEY BETWEEN APPLICATION INSTANCES - According to an example, when there is a hotkey message of a hotkey, an application instance that registers the hotkey receives the hotkey message, distributes the hotkey message to an application instance that does not register the hotkey, determines whether there is an application instance that does not register the hotkey and is to process the hotkey message; when there is the application instance that does not register the hotkey and is to process the hotkey message, receives feedback information about processing the hotkey message returned from the application instance that does not register the hotkey; and when there is not the application instance that does not register the hotkey and is to process the hotkey message, processes the hotkey message. | 2014-12-25 |
20140380336 | METHOD FOR SIMULATING SCREEN SHARING FOR MULTIPLE APPLICATIONS RUNNING CONCURRENTLY ON A MOBILE PLATFORM - A system for sharing a physical display screen among multiple applications on a mobile platform includes an Internet-connected client device and software executing on the client device from a non-transitory physical medium, the software providing a first function assigning dominancy to one of the multiple running applications, a second function mitigating application background transparency among the multiple running applications, a third function establishing a messaging mechanism and protocol between the multiple running applications, and a fourth function enabling the dominant application to intercept digital input directed toward individual ones of the multiple running applications and to dispatch the input to the appropriate application. | 2014-12-25 |
20140380337 | EVENT-DRIVEN APPLICATION SYSTEMS AND METHODS - An event-driven application system and method includes an activity-function engine in communication with an event-driven application. The activity-function engine includes an event matching list having a plurality of input-event to activity-function mappings. The event-driven application includes at least one programmable object having at least one activity-function. The activity-function engine is configured to receive an input-event from the application, to match the input-event to the at least one activity-function based on the plurality of input-event to activity-function mappings, and to execute the at least one activity-function. | 2014-12-25 |
20140380338 | Method And Apparatus To Protect A Processor Against Excessive Power Usage - In an embodiment, a processor includes at least a first core. The first core includes execution logic to execute operations, and a first event counter to determine a first event count associated with events of a first type that have occurred since a start of a first defined interval. The first core also includes a second event counter to determine a second event count associated with events of a second type that have occurred since the start of the first defined interval, and stall logic to stall execution of operations including at least first operations associated with events of the first type, until the first defined interval is expired responsive to the first event count exceeding a first combination threshold concurrently with the second event count exceeding a second combination threshold. Other embodiments are described and claimed. | 2014-12-25 |
20140380339 | SYSTEM AND METHOD FOR OPTIMIZING USER NOTIFICATIONS FOR SMALL COMPUTER DEVICES - A system and method for notifying users in a manner that is appropriate for the event and the environment for the user. The method of the present invention relates to determining the desired properties of an event and assigning varying notification characteristics to that event. Profiles are created of the various events, wherein each profile relates to a different mode or situational environment, such as a meeting environment, an office or normal environment, a louder outside-type environment, etc. The invention further relates to placing the small computer device in a particular mode, either automatically or manually. Once in a particular mode the device provides notifications according to that mode. | 2014-12-25 |
20140380340 | Dependency Based Configuration Package Activation - An update platform is described that collectively handles driver and firmware updates for hardware resources of a computing device based on dependencies associated with the updates. The update platform may instantiate representations of each individual hardware resource as abstractions through which detection, analysis, acquisition, deployment, installation, and tracking of updates is managed. Using the representations, the update platform discovers available updates, matches configuration packages for the updates to appropriate resources, and initiates installation of the configuration packages. The update platform is further configured to recognize dependencies associated with the configuration packages. When dependencies are detected, corresponding configuration packages are marked to reflect the dependencies and activation is suspended until the dependencies are satisfied. Upon satisfaction of the dependencies, the dependencies are cleared and the configuration packages are activated. Configuration packages that are not associated with dependencies may be installed and activated “normally” at any time. | 2014-12-25 |
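The suspend-until-dependencies-are-satisfied activation in 20140380340 amounts to a fixpoint over the dependency graph: activate whatever is ready, then re-check whether newly activated packages unblock others. A minimal sketch, with hypothetical data shapes rather than the update platform's actual API:

```python
def activation_order(packages):
    """packages maps a configuration-package name to the set of package
    names it depends on. Repeatedly activate any package whose dependencies
    are all active; packages whose dependencies are never satisfied
    (e.g. cycles or missing packages) remain suspended."""
    active, order = set(), []
    progress = True
    while progress:
        progress = False
        for name, deps in packages.items():
            if name not in active and deps <= active:
                active.add(name)
                order.append(name)
                progress = True
    suspended = [name for name in packages if name not in active]
    return order, suspended
```

A package with an empty dependency set corresponds to the "normal" case in the abstract: it can be installed and activated at any time.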
20140380341 | APPLICATION ACTIVATION USING DECOUPLED VERSIONING - Instead of an application specifying that it uses an entire API, the application specifies the subset(s) of the API that it uses. Specific hosts can choose when to implement a subset of the API set without having to support other subsets of the API. When the host implements a subset of API set that was not previously supported, an application that specified the use of the newly supported subset begins to work on the hosts automatically. An application may specify subsets having different versions. For example, the versions of different subsets that are specified may be different. When the host supports the subsets used by the application, the application is activated (i.e. “run”). When the host does not support one or more of the subsets used by the application, the application is not activated. | 2014-12-25 |
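The activation decision in 20140380341 reduces to comparing the API subsets (and versions) an application declares against those the host implements. A sketch under assumed data shapes:

```python
def can_activate(required_subsets, host_subsets):
    """required_subsets: {subset name: minimum version the application
    declares}; host_subsets: {subset name: version the host implements}.
    Hypothetical shapes: the application is activated only when the host
    supports every declared subset at a sufficient version."""
    return all(
        subset in host_subsets and host_subsets[subset] >= version
        for subset, version in required_subsets.items()
    )
```

Under this scheme, a host that later adds support for a subset automatically starts activating applications that declared only that subset, with no change to the application itself.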
20140380342 | OPTICAL DISC APPARATUS, SHEET MEMBER, AND METHOD FOR CLEANING OBJECTIVE LENS - The present application discloses an optical disc apparatus including: a drive mechanism for rotating a medium including a processing surface to be subjected to an optical information process; a housing for storing the drive mechanism; a tray mechanism for displacing the medium between a storage position, at which the medium is stored in the housing, and an ejection position, at which the medium is ejected from the housing; at least one objective lens for condensing light onto the processing surface of the medium situated at the storage position to perform the information process; and a first displacement mechanism for displacing the at least one objective lens along the processing surface between a first position and a second position, which is more distant from the rotational center of the medium rotated by the drive mechanism than the first position is. The tray mechanism defines at least one opening in a position closer to the second position than to the first position. | 2014-12-25 |
20140380343 | METHOD AND APPARATUS FOR DOWNLOADING MULTI-EPISODE CONTENT - A method for providing content to a viewer commences by downloading, in response to viewer selection, at least one piece of content having multiple episodes, each episode having a scheduled play out date. Thereafter, an actual play out date for each episode of the at least one piece of content is established based on viewer input, such that the actual play out date for each episode does not occur earlier than the scheduled play out date. Lastly, the viewer is billed upon the actual play out of each episode in an amount dependent on how long the actual play out date is delayed from the scheduled play out date. | 2014-12-25 |
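The abstract of 20140380343 says the billed amount depends on how long play out is delayed past the schedule but gives no formula; the per-day discount below, and its cap, are purely an assumed pricing rule to illustrate the mechanism.

```python
from datetime import date

def episode_charge(base_price, scheduled, actual, per_day_discount=0.05,
                   max_discount_days=10):
    """Bill for one episode at its actual play out. The discount per day of
    delay (and its cap) is an invented rule; the abstract only states that
    the amount depends on the length of the delay."""
    delay_days = max(0, (actual - scheduled).days)
    discount = min(delay_days, max_discount_days) * per_day_discount
    return round(base_price * (1 - discount), 2)
```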
20140380344 | Method and Apparatus for Program Information Exchange and Communications System - Relating to the field of communications, embodiments of the present invention provide a method and an apparatus for program information exchange, and a communications system, that enable a user to comment on a program currently being watched by using a user equipment, without adding an external input device. The method for program information exchange in the embodiment includes: obtaining a program comment instruction; extracting information about the current program from a pre-established database according to the program comment instruction; and, if the information includes a program comment address, sending the program comment address to the user equipment. | 2014-12-25 |
20140380345 | PASSING CONTROL OF GESTURE-CONTROLLED APPARATUS FROM PERSON TO PERSON - A television (TV) includes a display and a processor controlling the display and receiving signals representing human gestures. The processor is programmed to respond to gestures from a first viewer to control the display. The processor is also programmed to respond to gestures from a second viewer to control the display only responsive to a determination that the first viewer has looked toward the second viewer and has confirmed, as a separate act from looking toward the second viewer, a desire to transfer control of the TV to the second viewer. | 2014-12-25 |
20140380346 | SYSTEM AND METHOD OF CONTENT AND MERCHANDISE RECOMMENDATION - A method includes receiving, at a user device, first input corresponding to selection of a recommendation option of an electronic program guide. The method includes sending, from the user device to a display device, a first option to base a media content recommendation on a most recently displayed media content item and a second option to base the media content recommendation on a plurality of displayed media content items. The method includes receiving, at the user device, second input that corresponds to the first option or the second option. The method includes sending a request from the user device to a server. The request includes an identifier based on the second input. The method also includes receiving information corresponding to one or more recommended media content items at the user device in response to the request. | 2014-12-25 |
20140380347 | METHODS AND SYSTEMS FOR USER EXPERIENCE BASED CONTENT CONSUMPTION - Computer-implemented methods, systems, and computer readable media are disclosed for user experience based content consumption. The computer-implemented methods include, for example, receiving a request reflecting a content consumption of a content item. The computer-implemented methods may also include determining, using at least one processor and as a function of historical experience data, an alternative content consumption associated with a better user experience than the requested content consumption. In addition, the computer-implemented methods may also include outputting data associated with the alternative content consumption. | 2014-12-25 |
20140380348 | METHODS AND APPARATUS TO CHARACTERIZE HOUSEHOLDS WITH MEDIA METER DATA - Methods, apparatus, systems and articles of manufacture are disclosed to characterize households with media meter data. An example method includes: identifying, with a processor, a target set of household categories associated with a target research geography; when a quantity of households within the target research geography representing the target set of household categories does not satisfy a threshold value, generating a first subset of categories and a second subset of categories from the target set of household categories; identifying a first set of households representing the first subset of categories and identifying an associated total number of household tuning minutes and total number of household exposure minutes; for each category in the second subset of categories, calculating a household tuning proportion and an exposure proportion based on the total number of household tuning minutes and exposure minutes, respectively; and calculating a panelist behavior probability based on the exposure proportion and the household tuning proportion. | 2014-12-25 |
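The per-category proportion calculations in 20140380348 can be sketched as follows. The abstract leaves the exact combination of the two proportions unspecified, so the final ratio (clipped to [0, 1]) is an assumption made here for illustration.

```python
def panelist_behavior_probability(cat_minutes, total_minutes):
    """cat_minutes = (tuning, exposure) minutes for one category;
    total_minutes = (tuning, exposure) minutes across all categories.
    Computes the household tuning proportion and the exposure proportion,
    then combines them; the ratio used below is an assumed rule, since the
    abstract only says the probability is 'based on' the two proportions."""
    tuning_prop = cat_minutes[0] / total_minutes[0]
    exposure_prop = cat_minutes[1] / total_minutes[1]
    if tuning_prop == 0:
        return 0.0
    return min(exposure_prop / tuning_prop, 1.0)
```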
20140380349 | METHODS AND APPARATUS TO CHARACTERIZE HOUSEHOLDS WITH MEDIA METER DATA - Methods, apparatus, systems and articles of manufacture are disclosed to characterize households with media meter data. An example method includes: identifying, with a processor, a household minute credited to a station in a household by a first media meter (MM) device, the first MM device collecting only audio data; determining whether a panelist audience meter device is crediting the station at the same time in the household; when the panelist audience meter device is crediting the same station, associating the household minute with an ambient tuning status; identifying a first automatic gain control (AGC) value and a first code status of the household minute; calculating model coefficients based on the ambient tuning status, the first AGC value, and the first code status; and calculating a probability of ambient tuning based on the model coefficients. | 2014-12-25 |
20140380350 | METHODS AND APPARATUS TO CHARACTERIZE HOUSEHOLDS WITH MEDIA METER DATA - Methods, apparatus, systems and articles of manufacture are disclosed to characterize households with media meter data. An example method includes: identifying, with a processor, a power status and a first automatic gain control (AGC) value for an exposure minute from a panelist audience meter in a first household, the panelist audience meter comprising a power sensor; identifying a second AGC value and a daypart for a household tuning minute from a first media meter (MM) in the first household, the MM comprising microphones to collect audio data; and calculating model coefficients, based on the exposure minute and the household tuning minute, to be applied to data from a second MM in a second household, the model coefficients facilitating a power status probability calculation in the second household, which is devoid of the panelist audience meter having the power sensor. | 2014-12-25 |
20140380351 | RECEIVER SET, INFORMATION APPARATUS AND RECEIVING SYSTEM - An information apparatus connectable with plural external apparatuses includes: an audio visual data outputting portion; a command input portion which inputs a command from an external apparatus requesting audio visual data; an information managing portion which manages the number of external apparatuses to which audio visual data can be distributed for simultaneous viewing or recording; and a controller portion which controls distribution of the data to the external apparatus that sent the command, depending upon a distributing condition. The controller portion decides whether the number of external apparatuses to which data can be distributed is equal to or smaller than a limited number, and controls the distribution of data to the external apparatus based on the result of that decision. | 2014-12-25 |
20140380352 | Trick Play Seek Operation for HLS Converted from DTCP - A process to enable trick play operations is provided for HLS streaming video that has been converted by a system from DTCP. The system server provides a modified SEEK operation when an HLS GET message is received from an HLS client player. For the process, a DLNA header is provided by the HLS client player by including it in the HLS GET message. The HLS client also provides a DLNA RANGE REQUEST that requests the range of chunks making up the desired video and a seek point from which a seek operation is needed. The HLS server recognizes the DLNA header of the HLS GET message and the DLNA RANGE REQUEST, and obtains the range of chunks making up the extent of the recorded video using metadata fields. The server then generates a new HLS playlist identifying the chunks and keytag corresponding to the seek operation. The server provides chunks from the seek point and a rolling playlist identifying the chunks and keytag from the seek point. | 2014-12-25 |
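The playlist rewrite in 20140380352 (a new playlist listing only the chunks and key tag from the seek point onward) can be sketched with standard HLS media-playlist tags. The function and parameter names are hypothetical.

```python
def seek_playlist(chunk_urls, key_uri, seek_index, chunk_duration=10):
    """Build a minimal HLS media playlist that starts at the chunk
    containing the seek point. Tags are standard HLS; the specific
    rewrite shown is a sketch, not the patented server's output."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{chunk_duration}",
        f"#EXT-X-MEDIA-SEQUENCE:{seek_index}",
        f'#EXT-X-KEY:METHOD=AES-128,URI="{key_uri}"',
    ]
    for url in chunk_urls[seek_index:]:
        lines.append(f"#EXTINF:{chunk_duration}.0,")
        lines.append(url)
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)
```

A rolling variant would omit `#EXT-X-ENDLIST` and re-serve the playlist as new chunks past the seek point become available.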
20140380353 | SECURE MULTIMEDIA TRANSFER SYSTEM - A method and apparatus for secure multimedia transfer provides an encrypted data transfer system that makes transferring multimedia content from a client to any incompatible system or to a system outside the location of the client very difficult. | 2014-12-25 |
20140380354 | SYSTEMS AND METHODS OF MEDIA CLIP SHARING WITH SPONSOR CARRYOVER - An exemplary method includes a computer-implemented media clip sharing system receiving, from an end-user of a media distribution service that distributes a media program, a request to share a clip of the media program to a social network and, in response to the request to share the clip of the media program to the social network, identifying a sponsor of the media program in the media distribution service and sharing the clip of the media program and data representative of the identified sponsor to the social network. Corresponding systems and methods are also described. | 2014-12-25 |
20140380355 | METHOD AND APPARATUS FOR INSERTING A VIRTUAL OBJECT IN A VIDEO - A method and an apparatus for inserting a virtual object in a video are described. The method utilizes a saliency map that characterizes a viewer's gaze allocation on an image of the video, and inserts the virtual object in the image of the video based on the saliency map. The method comprises: generating a saliency map of the image of the video after the insertion of the virtual object; and adjusting the insertion of the virtual object based on the saliency map by adjusting at least one visual characteristic of the inserted virtual object. | 2014-12-25 |
20140380356 | DEVICE AND METHOD FOR PROCESSING BI-DIRECTIONAL SERVICE RELATED TO BROADCAST PROGRAM - A broadcast receiver processing an interactive service related to a broadcast program according to one embodiment of the present invention includes: a tuner configured to receive trigger information including information on the operation timing of a TDO (triggered declarative object), wherein the trigger information includes at least one of first URL information indicating a position of a signaling server configured to provide a TDO parameter table, first trigger identification information identifying a trigger included in the TDO, first time information setting a reference time for the trigger, and second time information setting an operation time of the trigger; a first network interface configured to access the signaling server using the first URL information and to receive from the signaling server a TDO parameter table signaling metadata information on at least one TDO in a specific segment, wherein the TDO parameter table includes second URL information indicating a position of a content server configured to provide at least one file included in the TDO and second identification information identifying a trigger included in the TDO; a second network interface configured to access the content server using the second URL information and to receive at least one file included in the TDO; a widget engine configured to operate the trigger; and a video processor configured to generate a video image including, by the trigger operation, a TDO and a broadcast program. | 2014-12-25 |
20140380357 | SERVER SIDE ADAPTIVE BIT RATE REPORTING - A server receives metadata associated with an advertisement in a transport signal stream from an encoder, the metadata identifying a specified frame of the transport signal stream corresponding to a point in time of the advertisement. The server instructs the encoder to insert a marker into the specified frame of the transport signal stream, the marker identifying the point in time of the advertisement. The server receives data from a smart appliance and detects the marker in the data. The server identifies the marker as the specified frame of the transport signal stream played by the smart appliance, and maps the marker to the identified point in time of the advertisement. | 2014-12-25 |
20140380358 | MENU PROMOTIONS USER INTERFACE - A system includes a processor and a memory coupled to the processor. The memory includes instructions that, when executed by the processor, cause the processor to perform operations including initiating display of a user interface that includes a plurality of menu items, where a first menu item of the plurality of menu items is associated with a first media content item of a plurality of media content items. The operations also include selectively enabling a movement operation based on a promotion being displayed at a display device. The promotion is associated with the first menu item and the movement operation moves a cursor position from a second menu item of the plurality of menu items to the first menu item. | 2014-12-25 |