Patent application number | Description | Published |
20090249480 | MINING USER BEHAVIOR DATA FOR IP ADDRESS SPACE INTELLIGENCE - The claimed subject matter is directed to mining user behavior data for increasing Internet Protocol (“IP”) space intelligence. Specifically, the claimed subject matter provides a method and system of mining user behavior within an IP address space and the application of the IP address space intelligence derived from the mined user behavior. | 10-01-2009 |
20090265786 | AUTOMATIC BOTNET SPAM SIGNATURE GENERATION - A framework may be used for generating URL signatures to identify botnet spam and membership. The framework may take a set of unlabeled emails as input that are grouped based on URLs contained within the emails. The framework may return a set of spam URL signatures and a list of corresponding botnet host IP addresses by analyzing the URLs within the emails that are contained within the groups. Each URL signature may be in the form of either a complete URL string or a URL regular expression. The signatures may be used to identify spam emails launched from botnets, while the knowledge of botnet host identities can help filter other spam emails also sent by them. | 10-22-2009 |
20100057647 | ACCOMMODATING LEARNED CLAUSES IN RECONFIGURABLE HARDWARE ACCELERATOR FOR BOOLEAN SATISFIABILITY SOLVER - A hardware accelerator is provided for Boolean constraint propagation (BCP) using field-programmable gate arrays (FPGAs) for use in solving the Boolean satisfiability problem (SAT). An inference engine may perform implications. Learned clauses may be generated during conflict analysis. Operations pertaining to learned clauses may include clause insertion and clause deletion (e.g., by invalidation) from a learned clause inference engine, and “garbage collection” in which unused or invalidated clauses may be removed from an inference engine. | 03-04-2010 |
20100095374 | GRAPH BASED BOT-USER DETECTION - Computer implemented methods are disclosed for detecting bot-user groups that send spam email over a web-based email service. Embodiments of the present system employ a two-pronged approach to detecting bot-user groups. The first prong employs a historical-based approach for detecting anomalous changes in user account information, such as aggressive bot-user signups. The second prong of the present system entails constructing a large user-user relationship graph, which identifies bot-user subgraphs by finding tightly connected subgraph components. | 04-15-2010 |
20100312877 | HOST ACCOUNTABILITY USING UNRELIABLE IDENTIFIERS - An IP (Internet Protocol) address is a directly observable identifier of host network traffic in the Internet and a host's IP address can dynamically change. Analysis of traffic (e.g., network activity or application request) logs may be performed and a host tracking graph may be generated that shows hosts and their bindings to IP addresses over time. A host tracking graph may be used to determine host accountability. To generate a host tracking graph, a host is represented. Host representations may be application-dependent. In an implementation, application-level identifiers (IDs) such as user email IDs, messenger login IDs, social network IDs, or cookies may be used. Each identifier may be associated with a human user. These unreliable IDs can be used to track the activity of the corresponding hosts. | 12-09-2010 |
20110208714 | LARGE SCALE SEARCH BOT DETECTION - A framework may be used for identifying low-rate search bot traffic within query logs by capturing groups of distributed, coordinated search bots. Search log data may be input to a history-based anomaly detection engine to determine if query-click pairs associated with a query are suspicious in view of historical query-click pairs for the query. Users associated with suspicious query-click pairs may be input to a matrix-based bot detection engine to determine correlations between queries submitted by the users. Those users indicating strong correlations may be categorized as bots, whereas those who do not may be categorized as part of flash crowd traffic. | 08-25-2011 |
20110283360 | IDENTIFYING MALICIOUS QUERIES - A framework identifies malicious queries contained in search logs to uncover relationships between the malicious queries and the potential attacks launched by attackers submitting the malicious queries. A small seed set of malicious queries may be used to identify an IP address in the search logs that submitted the malicious queries. The seed set may be expanded by examining all queries in the search logs submitted by the identified IP address. Regular expressions may be generated from the expanded set of queries and used for detecting yet new malicious queries. Upon identifying the malicious queries, the framework may be used to detect attacks on vulnerable websites, spamming attacks, and phishing attacks. | 11-17-2011 |
20120102169 | AUTOMATIC IDENTIFICATION OF TRAVEL AND NON-TRAVEL NETWORK ADDRESSES - A system to automatically classify types of IP addresses associated with a user. Information, such as user names, machine information, IP address, etc., may be obtained from logs. For each user or host in the logs, home IP addresses are identified from IP addresses where the user or host shows a predetermined level of activity. Travel IP addresses are identified, which are IP addresses at locations greater than a predetermined distance from the home IP addresses, as determined from geolocation data. A pattern analysis may be performed to determine which of the home IP addresses are work IP addresses associated with the user or host. The system may thus provide a classification of a user's or host's associated IP addresses as being one of travel, home, and work IP addresses. From this classification, mobility patterns may be derived, as well as applications to enhance security, advertising, search and network management. | 04-26-2012 |
20120246720 | USING SOCIAL GRAPHS TO COMBAT MALICIOUS ATTACKS - Detection of user accounts associated with spammer attacks may be performed by constructing a social graph of email users. Biggest connected components (BCC) of the social graph may be used to identify legitimate user accounts, as the majority of the users in the biggest connected components are legitimate users. BCC users may be used to identify more legitimate users. Using degree-based detection techniques and PageRank based detection techniques, the hijacked user accounts and spammer user accounts may be identified. The users' email sending and receiving behaviors may also be examined, and the subgraph structure may be used to detect stealthy attackers. From the social graph analysis, legitimate user accounts, malicious user accounts, and compromised user accounts can be identified. | 09-27-2012 |
20120304287 | AUTOMATIC DETECTION OF SEARCH RESULTS POISONING ATTACKS - Search result poisoning attacks may be automatically detected by identifying groups of suspicious uniform resource locators (URLs) containing multiple keywords and exhibiting patterns that deviate from other URLs in the same domain without crawling and evaluating the actual contents of each web page. Suspicious websites are identified and lexical features are extracted for each such website. The websites are clustered based on their lexical features, and group analysis is performed on each group to identify at least one suspicious group. Other implementations are directed to detecting a search engine optimization (SEO) attack by processing a large population of URLs to identify suspicious URLs based on the presence of a subset of keywords in each URL and the relative newness of each URL. | 11-29-2012 |
20130185791 | VOUCHING FOR USER ACCOUNT USING SOCIAL NETWORKING RELATIONSHIP - Trusted user accounts of an application provider are determined. Graphs, such as trees, are created with each node corresponding to a trusted account. Each of the nodes is associated with a vouching quota, or the nodes may share a vouching quota. Untrusted user accounts are determined. For each of these untrusted accounts, a trusted user account that has a social networking relationship is determined. If the node corresponding to the trusted user account has enough vouching quota to vouch for the untrusted user account, then the quota is debited, a node is added for the untrusted user account to the graph, and the untrusted user account is vouched for. If not, available vouching quota may be borrowed from other nodes in the graph. | 07-18-2013 |
20130188486 | DATA CENTER NETWORK USING CIRCUIT SWITCHING - A circuit-based digital communications network is provided for a large data center environment that utilizes circuit switching in lieu of packet switching in order to lower the cost of the network and to gain performance efficiencies. A method for transmitting data in such a network comprises sending a setup request for a path for transmitting the data to a destination node and then speculatively sending the data to the destination node before the setup request is completed. | 07-25-2013 |
20130339158 | DETERMINING LEGITIMATE AND MALICIOUS ADVERTISEMENTS USING ADVERTISING DELIVERY SEQUENCES - Known legitimate and malicious display advertisements are selected, and the ordered sequence of entities involved in the delivery of each display advertisement is observed and used to generate advertisement delivery sequences. The entities include the various servers, publishers, and advertising networks that are involved in the delivery of a display advertisement. Attributes of the entities in each sequence are determined and used to generate a set of rules that identify a display advertisement as legitimate or malicious based on the attributes of the advertising delivery sequence associated with the delivery of the display advertisement. The generated rules are used to identify possible malicious advertisements, and to identify one or more sources of malicious display advertisements. | 12-19-2013 |
20130347113 | DETERMINING POPULATED IP ADDRESSES - A service log of a service provider is analyzed to identify IP addresses used by account holders that are populated IP addresses. Existing information about legitimate and malicious accounts of the service provider is leveraged to determine likely good and bad populated IP addresses based on the accounts that use the populated IP addresses. Features of the good and bad populated IP addresses are used to train a classifier that can identify good and bad populated IP addresses based on features of the populated IP addresses. The classifier may be used to provide security services to the same service provider or different service providers. The services include identifying malicious accounts. | 12-26-2013 |
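The graph-based detection described in 20100095374 and 20120246720 can be sketched in miniature: accounts are linked when they share an IP address, and unusually large connected components are flagged as candidate bot-user groups. This is an illustrative simplification of the abstracts, not the patented method; the function name, input shape, and `min_group_size` threshold are invented for the example.

```python
from collections import defaultdict

def find_bot_user_groups(signups, min_group_size=3):
    """Flag groups of accounts that share signup IP addresses.

    `signups` maps a user ID to the set of IP addresses the account was
    observed on. Users sharing an IP are linked, and connected components
    of at least `min_group_size` users are reported as candidate groups.
    """
    # Invert the mapping: IP address -> users seen on that IP.
    by_ip = defaultdict(set)
    for user, ips in signups.items():
        for ip in ips:
            by_ip[ip].add(user)

    # Union-find over users that share at least one IP address.
    parent = {u: u for u in signups}

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path compression
            u = parent[u]
        return u

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for users in by_ip.values():
        users = sorted(users)
        for other in users[1:]:
            union(users[0], other)

    # Collect components and keep only the suspiciously large ones.
    groups = defaultdict(set)
    for u in signups:
        groups[find(u)].add(u)
    return [g for g in groups.values() if len(g) >= min_group_size]
```

A real system would combine this structural signal with the historical and behavioral checks the abstracts describe, since legitimate users behind a shared NAT can also form large components.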
Patent application number | Description | Published |
20100146040 | System and Method for Content Validation - A method of obtaining content includes receiving a playfile. The playfile includes a chunk ID corresponding to a chunk of the content, a packet ID corresponding to a packet of the chunk, and a hash of the packet. The method further includes obtaining the chunk from a peer, determining a calculated hash for the packet, and discarding the chunk when the calculated hash does not match the hash in the playfile. | 06-10-2010 |
20110231661 | Content Distribution with Mutual Anonymity - A method for transferring content includes requesting the content from a serving peer and sending the content to a requesting peer. Requesting the content includes sending a request to a tracker, receiving a request token, a path identifier, and a first peer identifier from the tracker, and sending a request message to a second peer. The first peer identifier includes an identity of a first peer, and the request message includes the request token, the path identifier, and the first peer identifier. Sending the content includes receiving the request token and the path identifier from a third peer, sending a return message to a fourth peer, and transferring the content from the serving peer to the requesting peer through a transfer path. The return message includes the path identifier and a second peer identifier. The second peer identifier includes an identity of a fifth peer. The transfer path includes at least the second, fourth, and fifth peers. | 09-22-2011 |
20120096081 | System and Method for Content Validation - A method includes receiving at a directory server a notification from a client system, where the notification indicates that the client system received a corrupt packet of a playfile from a first peer. The method also includes determining if the first peer is a poor quality peer, updating a first peer score for the first peer if the first peer is not a poor quality peer, identifying a second peer that is not on a blacklist, and providing a peer identification associated with the second peer to the client system. | 04-19-2012 |
20130332533 | Content Distribution with Mutual Anonymity - A method for transferring content includes requesting the content from a serving peer and sending the content to a requesting peer. Requesting the content includes sending a request to a tracker, receiving a request token, a path identifier, and a first peer identifier from the tracker, and sending a request message to a second peer. The first peer identifier includes an identity of a first peer, and the request message includes the request token, the path identifier, and the first peer identifier. Sending the content includes receiving the request token and the path identifier from a third peer, sending a return message to a fourth peer, and transferring the content from the serving peer to the requesting peer through a transfer path. The return message includes the path identifier and a second peer identifier. The second peer identifier includes an identity of a fifth peer. The transfer path includes at least the second, fourth, and fifth peers. | 12-12-2013 |
20140359682 | SYSTEM AND METHOD FOR CONTENT VALIDATION - A method includes receiving at a directory server a notification from a client system, where the notification indicates that the client system received a corrupt packet of a playfile from a first peer. The method also includes determining if the first peer is a poor quality peer, updating a first peer score for the first peer if the first peer is not a poor quality peer, identifying a second peer that is not on a blacklist, and providing a peer identification associated with the second peer to the client system. | 12-04-2014 |
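The per-packet validation described in 20100146040 can be sketched as follows: the playfile carries an expected hash for each packet of a chunk, and a single mismatch causes the whole chunk to be discarded (and, per 20120096081, the bad peer reported). This is a minimal sketch, assuming SHA-256 as the hash function (the abstracts do not specify one); the function and parameter names are invented for the example.

```python
import hashlib

def validate_chunk(chunk_packets, playfile_hashes):
    """Check each packet of a downloaded chunk against the playfile.

    `chunk_packets` maps packet IDs to raw packet bytes received from a
    peer; `playfile_hashes` maps the same packet IDs to the expected
    hex digests carried in the playfile. Returns True only when every
    packet matches; any mismatch means the chunk should be discarded
    and re-requested from another peer.
    """
    for packet_id, data in chunk_packets.items():
        expected = playfile_hashes.get(packet_id)
        if expected is None:
            return False  # packet not listed in the playfile
        if hashlib.sha256(data).hexdigest() != expected:
            return False  # corrupt packet: discard the whole chunk
    return True
```

Validating at packet granularity lets the client pinpoint which peer supplied corrupt data while still rejecting the chunk as a unit, which is what makes the peer-scoring and blacklisting of 20120096081 possible.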