Article

Interactive Graph Stream Analytics in Arkouda

Department of Computer Science, New Jersey Institute of Technology, Newark, NJ 07102, USA
*
Authors to whom correspondence should be addressed.
Algorithms 2021, 14(8), 221; https://doi.org/10.3390/a14080221
Submission received: 1 April 2021 / Revised: 3 July 2021 / Accepted: 12 July 2021 / Published: 21 July 2021
(This article belongs to the Special Issue Scalable Graph Algorithms and Applications)

Abstract

Data from emerging applications, such as cybersecurity and social networking, can be abstracted as graphs whose edges are updated sequentially in the form of a stream. The challenge of interactive graph stream analytics is responding quickly to end-user queries over terabyte-scale and larger graph stream data. In this paper, a succinct and efficient double-index data structure is designed to build a sketch of a graph stream that can serve general queries. A single-pass stream model is developed, comprising general sketch building, distributed sketch-based analysis algorithms and regression-based approximate solution generation, and a typical graph algorithm—triangle counting—is implemented to evaluate the proposed method. Experimental results on power law and normal distribution graph streams show that our method can generate accurate results (mean relative error less than 4%) with high performance. All our methods and code have been implemented in the open source framework Arkouda and are available from our GitHub repository, Bader-Research. This work provides the large and rapidly growing Python community with a powerful way to handle terabyte and beyond graph stream data from their laptops.

1. Introduction

More and more emerging applications, such as social networks, cybersecurity and bioinformatics, have data that often come in the form of real-time graph streams [1]. Over its lifetime, the sheer volume of a stream could be petabytes or more; consider, for example, network traffic analysis in the IPv6 network, which has 2^128 nodes. These applications motivate the challenging problem of designing succinct data structures and real-time algorithms that return approximate solutions. Different stream models [2,3,4,5,6,7], which allow access to the data in only one pass (or multiple passes in semi-streaming algorithms), have been developed to answer different queries using a significantly reduced, sublinear space. The accuracy of such models’ approximate solutions is often guaranteed by a pair of user specified parameters, ϵ > 0 and 0 < δ < 1: a (1 + ϵ) approximation ratio is achieved with probability (1 − δ). This literature provides the theoretical foundation for solving the challenging problem of large scale stream data.
However, ignoring the hidden constants in theoretical space and time complexity analyses can cause unacceptable performance problems in real-world applications. As applications across the social, economic and natural domains must handle large real-time data streams as routine work, practical performance becomes an even more important and urgent requirement.
At the same time, graph streams need exploratory data analysis (EDA) [8,9,10]. Instead of checking results, given a hypothesis, with data, EDA is primarily for seeing what the data can tell us beyond the formal modeling or hypotheses testing task. In this way, EDA tries to maximize the value of data. Popular EDA methods and tools, which often run on laptops or common personal computers, cannot hold terabyte or even larger graph datasets, let alone produce highly efficient analysis results. Arkouda [11,12] is an EDA framework under early development that brings together the productivity of Python with world-class high-performance computing. If a graph stream algorithm can be integrated into Arkouda, it means that data scientists can take advantage of both laptop computing and cloud/supercomputing to perform interactive data analysis at scale.
In this work, we propose a single pass stream model and develop a typical algorithm for interactive graph stream analytics that can meet practical performance requirements with high accuracy. Furthermore, the proposed model and algorithm will be integrated into Arkouda to support the Python community to handle graph streams efficiently. The major contributions of this paper are as follows:
  • A succinct Double-Index (DI) data structure is designed to build the sketch of a large graph stream. The DI data structure can support an edge based sparse graph partition to achieve load balance and to achieve O ( 1 ) time complexity in searching the incidence vertices of a given edge or the adjacency list of a given vertex.
  • A single pass regression analysis stream model is developed to support general graph streams analysis. Our model can map a large graph stream into a much smaller user specified working space to build a general graph stream sketch. Our method can improve the query quality by dividing the sketch into multiple partitions. A general regression method is proposed to generate the approximate solution based on the partial query results from different partitions.
  • A typical graph algorithm—triangle counting [13]—is developed to evaluate the efficiency of the proposed stream regression model. Experimental results using two typical distributions—power law and normal distributions—show that our method achieves high accuracy with a very low (less than 4%) mean relative error.
  • All the proposed methods have been integrated into an open-source framework, Arkouda, to evaluate their practical end-to-end performance. This work can help the large and popular data analytics community that exists within Python to conduct interactive graph stream analytics easily and efficiently on terabytes, and beyond, of graph stream data.

2. Related Work

There are two different methods for handling graph streams. One is the exact method and the other is the approximate method. The exact method needs very large memory to completely hold the graph stream. This is often not feasible for hardware resource limited systems. The approximate method will use very limited memory (sublinear space) to handle the graph streams but can only provide approximate solutions. In this section, we will introduce the related approximate methods, exact methods and different triangle counting algorithms.

2.1. Graph Stream Sketch

A much smaller sketch allows for many queries over the large graph stream to obtain approximate results efficiently. Therefore, how to build a graph stream’s sketch is of fundamental importance for graph stream analytics. There are several methods that build the sketch by treating each stream element independently without keeping the relationships among those elements. For example, CountMin [4] allows fundamental queries in data stream summarization such as point, range, and inner product queries. It can also be used to find quantiles and frequent items in a data stream. Bottom K sketch [5] places both priority sampling and weighted sampling, without replacement, within a unified framework. It can generate tighter bounds when the total weight is available. gSketch [6] introduces the sketch partitioning technique to estimate and optimize the responses to basic queries on graph streams. By exploiting both data and query workload samples, gSketch can achieve better query estimation accuracy than that achieved with only the data sample. In fact, we borrow the sketch partitioning idea in our implementation of the Double-Index (DI) sketch. However, these existing sketches focus on ad-hoc problems (they can only solve the proposed specific problems instead of general problems), so they cannot support general data analytics over graph streams.
TCM sketch [7] uses a graphical model to build the sketch of a graph stream. This means that TCM is a general sketch that allows complicated queries. However, TCM’s focus is setting up a general sketch theoretically, instead of optimizing the practical performance for real-world datasets. Our Double-Index (DI) sketch is especially designed for real-world sparse graph streams and it can achieve a high practical performance.

2.2. Complete Graph Stream Processing Method

Several dynamic graph management and analytics solutions are as follows. Aspen [14] takes advantage of a purely functional tree data structure, C-trees, to implement quick graph updates and queries. LLAMA’s streaming graph data structure [15] is motivated by the CSR format. However, like Aspen, LLAMA is designed for batch processing in the single-writer multi-reader setting and does not provide graph mutation support. GraphOne [16] can run queries on the most recent version of the graph while processing updates concurrently by using a combination of an adjacency list and an edge list.
Systems such as LLAMA, Aspen and GraphOne focus on designing efficient dynamic graph data structures, and their processing units do not support incremental computation. KickStarter [17] maintains the dependency information in the form of dependency trees, and performs an incremental trimming process that adjusts the values based on monotonic relationships. GraphBolt [18] incrementally processes streaming graphs and minimizes redundant computations upon graph mutations while still guaranteeing synchronous processing semantics. DZiG [19] is a high-performance streaming graph processing system that retains efficiency in the presence of sparse computations while still guaranteeing BSP semantics. However, all such methods have the challenging requirement that the memory must be large enough to hold all the streams. For very large streams, this is often not feasible.

2.3. Triangle Counting Algorithm

Triangle counting is a key building block for important graph algorithms such as transitivity and K-truss. There are two kinds of triangle counting algorithms, exact triangle counting for static graphs and approximate triangle counting for graphs built from dynamic streams [1]. Since the proposed DI sketch is a graphical model, all the static triangle counting algorithms can also be used on the small DI sketch.
Graph Challenge https://graphchallenge.mit.edu/challenges (accessed on 20 July 2021) is an important competition for large graph algorithms, including triangle counting. Since 2017, many excellent static triangle counting algorithms have been developed, targeting three hardware platforms: shared memory, distributed memory and GPU.
The shared memory methods take advantage of fast parallel packages, such as Kokkos Kernels [20] or Cilk [21], to improve their performance, while the GPU methods [22,23,24] use the massively parallel fine-grain hardware threads of a GPU. Distributed memory triangle counting focuses on very large graphs that cannot fit in a single node’s memory. Heuristics [25], optimized communication libraries [26] and graph structures [27] are used to improve the performance. We leverage the ideas in such methods to develop our Chapel-based multi-locale triangle counting algorithm.
The basic idea of graph stream analysis is to estimate the exact query result of a graph stream based on sampling results. Colorful triangle counting [28] is an example; however, it needs to know the number of triangles and the maximum number of triangles on an edge to set the probability value, which is not feasible in practical applications. Reduction-based triangle counting [29] is a typical method that designs a theoretical algorithm based on user specified values (ϵ, δ). Such a method often cannot be used directly and satisfactorily in practical applications because hidden constant values often impact the performance. Neighborhood sampling [30] is another method for triangle counting with significant space and time complexity improvements. Specifically, Braverman et al. [31] discuss the difficulty of the triangle counting algorithm in a streaming model. Other sampling methods, such as [32], have space usage that depends on the ratio of the number of triangles to the number of triples, or require the edge stream to arrive in a specific order. Jha et al. [33] apply the birthday paradox to sampled data to estimate the number of triangles in a graph stream.

3. Arkouda Framework

Arkouda is an open-source framework that aims to support flexible, productive, and high performance large scale data analytics. Arkouda has three basic building components: the front-end (Python [34]), the back-end (Chapel [35]) and a middle communication layer (ZeroMQ [36]). Python is the interface for end users to access the different functions of Arkouda. All large data analysis and high performance computing (HPC) tasks are executed at the back-end (server) without users needing to know the algorithmic implementation details. ZeroMQ is used for the data and instruction exchanges between Python users and the Chapel back-end services.
In order to enable exploratory data analysis on large scale datasets in Python, Arkouda divides its data into two physical sections. The first section is the metadata, which only include attribute information and occupy very little memory space. The second section is the raw data, which include the actual big datasets to be handled by the back-end. However, from the view of the Python programmers, all data are directly available just as on their local laptop device. This is why Arkouda can break the limit of local memory capacity, while at the same time bringing traditional laptop users powerful computing capabilities that could previously only be provided by supercomputers.
When users are exploring their data, if only the metadata section is needed, then the operations can be completed locally and quickly. These actions are carried out just as in previous Python data processing workflows. If the operations have to be executed on raw data, the Python program will automatically generate an internal message and send the message to Arkouda’s message processing pipeline for external and remote help. Arkouda’s message processing center (ZeroMQ) is responsible for exchanging messages between its front-end and back-end. When the Chapel back-end receives the operation command from the front-end, it will execute the analysis task quickly on the powerful HPC resources and large memory to handle the corresponding raw data and return the required information back to the front-end. Through this, Arkouda can support Python users to locally handle large scale datasets residing on powerful back-end servers without knowing all the detailed operations at the back-end.
Figure 1 is a schematic diagram visualizing how Arkouda can bridge the popularity of Python and the power of HPC together to enable EDA at scale. In this figure, from the conceptual level view, Arkouda has three loosely coupled components. The different components can be designed and implemented independently as long as they follow the same information exchanging interface. Although Arkouda is implemented using Chapel, ZeroMQ and Python, the basic concepts can be implemented with different languages and packages. From the view of data, the data are classified into two categories, metadata and raw data. Correspondingly, we simply refer to the operations that can be done locally and quickly as light operations. The operations that need help from the remote server or data center are referred to as heavy operations. Arkouda can automatically organize the data and the corresponding operations to provide a better performance and user experience.
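To make the division between light (metadata) and heavy (raw data) operations concrete, the following minimal Python sketch shows how a user might interact with an Arkouda server; the host, port, and array size are illustrative assumptions, not values prescribed by the framework.

```python
import arkouda as ak

# Connect to a running Arkouda (Chapel) back-end; host and port are illustrative.
ak.connect(server="localhost", port=5555)

# A "heavy" operation: the large array lives on the server; the Python client
# only holds its metadata (name, size, dtype).
a = ak.randint(0, 2**32, 10**9)

# Reductions are executed by the back-end; only scalar results travel back.
print(a.min(), a.max(), a.sum())

ak.disconnect()
```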
In this paper, we discuss the integration of our graph stream sketch, model and algorithm into Arkouda to enable popular and high level Python users to conduct interactive graph stream analytics.

4. Succinct Data Structure

4.1. Edge Index and Vertex Index

We propose a Double-Index (DI) data structure to support quick search from a given edge to its incident vertices or from a given vertex to its adjacency list. The two index arrays are called the edge index array and the vertex index array. Furthermore, our DI data structure requires a significantly smaller memory footprint for sparse graphs.
The edge index array consists of two arrays with the same shape. One is the source vertex array and the other is the destination vertex array. If there are a total of M edges and N vertices, we will use the integers from 0 to M − 1 to identify different edges and the integers from 0 to N − 1 to identify different vertices. For example, given edge e = (i, j), we will let SRC[e] = i and DST[e] = j, where SRC is the source vertex array, DST is the destination vertex array, and e is the edge ID number. Both SRC and DST have the same size M. When all edges are stored in SRC and DST, we will sort them based on their combined vertex ID value (SRC[e], DST[e]) and remap the edge IDs from 0 to M − 1. Based on the sorted edge index array, we can build the vertex index array, which also consists of two arrays of the same shape. For example, in Figure 2, we let edge e1000 have ID 1000. Suppose e1000 = (50, 3), e1001 = (50, 70) and e1002 = (50, 110) are all the edges starting from vertex 50 (a directed graph). Then we will set one vertex index array entry STR[50] = 1000 and another vertex index array entry NEI[50] = 3. This means that, for the given vertex ID 50, the edges starting with vertex 50 are stored in the edge index array starting at position 1000 and there are a total of 3 such edges. If there are no edges from vertex i, we will let STR[i] = −1 and NEI[i] = 0. In this way, we can directly search the neighbors or adjacency list of any given vertex.
Our DI data structure can also support graph weights. If each vertex has a different weight, we use an array, V_WEI, to express the weight values. If each edge has a weight, we use an array, E_WEI, to store the different weights.
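As an illustration of the arrays described above (the production implementation is in Chapel inside Arkouda), the following Python sketch builds SRC, DST, STR and NEI from a small directed edge list; the helper name and the toy edge list are assumptions made for this example.

```python
import numpy as np

def build_di(edges, num_vertices):
    """Build the DI arrays (SRC, DST, STR, NEI) for a directed edge list."""
    src = np.array([i for i, _ in edges], dtype=np.int64)
    dst = np.array([j for _, j in edges], dtype=np.int64)

    # Sort edges by (source, destination); edge IDs are implicitly remapped to 0..M-1.
    order = np.lexsort((dst, src))
    src, dst = src[order], dst[order]

    # NEI[v]: out-degree of v; STR[v]: first position of v's edges in SRC/DST,
    # or -1 if v has no outgoing edge.
    nei = np.bincount(src, minlength=num_vertices)
    str_ = np.full(num_vertices, -1, dtype=np.int64)
    first = np.searchsorted(src, np.arange(num_vertices), side="left")
    str_[nei > 0] = first[nei > 0]
    return src, dst, str_, nei

# The neighbors of vertex v are DST[STR[v] : STR[v] + NEI[v]].
SRC, DST, STR, NEI = build_di([(0, 2), (0, 1), (2, 1), (3, 0)], 4)
print(DST[STR[0]:STR[0] + NEI[0]])   # -> [1 2]
```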

4.2. Time and Space Complexity Analysis

For a given array A, we use A[i..j] to express the elements of A from A[i] to A[j]; A[i..j] is also called an array section of A. So, a given vertex with index i has NEI[i] neighbors, whose vertex IDs are DST[STR[i]] to DST[STR[i] + NEI[i] − 1]. This can be expressed as the array section DST[STR[i]..STR[i] + NEI[i] − 1] (here we assume the out degree of i is not 0). For a given vertex i, the adjacency list of vertex i can be easily expressed as (i, x), where x in DST[STR[i]..STR[i] + NEI[i] − 1]. Based on the NEI and STR vertex index arrays, we can find the neighbor vertices or the adjacency list with O(1) time complexity. For a given edge e = (i, j), it is easy to find its incident vertices, i in array SRC[e] and j in array DST[e], also with O(1) time complexity. Regarding the storage space, if the graph has M edges and N vertices, we need 2M memory to store all the edges. Compared with the dense matrix data structure, which needs N^2 memory to store all the edges, this is much smaller. To improve the adjacency list search performance, we use 2N memory to store the NEI and STR arrays.
Figure 2 shows M sorted edges represented by the SRC and DST arrays. Any one of the N vertices n_k can find its neighbors using the NEI and STR arrays with O(1) time complexity. For example, given edge (i, j), if vertex j’s starting index in SRC is 1000 and it has three adjacency edges, then such edges can be found starting from index position 1000 in arrays SRC and DST using the NEI and STR arrays directly. This figure shows how the NEI and STR arrays can help us locate neighbors and adjacency lists quickly.
For an undirected graph, an edge (i, j) means that we can also arrive at i from j. We may use the data structures SRC, DST, STR, NEI to search the neighbors of j in SRC. However, this search cannot be performed with O(1) time complexity. To improve the search performance, we introduce another four arrays, called reversed arrays: SRCr, DSTr, STRr, NEIr. For any edge (i, j) that has its vertex i in SRC and vertex j in DST, we will have the corresponding reverse edge (j, i) in SRCr and DSTr, where SRCr has exactly the same elements as DST, and DSTr has exactly the same elements as SRC. SRCr and DSTr are also sorted, and NEIr and STRr are the array of the number of neighbors and the array of the starting neighbor index, just as for the directed graph. So, for a given vertex i of an undirected graph, the neighbors of vertex i include the elements in DST[STR[i]..STR[i] + NEI[i] − 1] and the elements in DSTr[STRr[i]..STRr[i] + NEIr[i] − 1]. The adjacency list of vertex i is (i, x), where x in DST[STR[i]..STR[i] + NEI[i] − 1], together with (i, x), where x in DSTr[STRr[i]..STRr[i] + NEIr[i] − 1].
Given a directed graph with M edges and N vertices, our data structure needs 2(M + N) integers (64 bits each) of storage, i.e., 16(M + N) bytes. For an undirected graph, we need twice the storage of a directed graph. For weighted vertices and weighted edges, an additional N and M integers of storage are needed, respectively.
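As a small illustration (not part of the Arkouda implementation), the following Python function performs the undirected neighbor lookup just described, assuming the DI and reversed DI arrays are already built; the array names follow the text, the function itself is an assumption.

```python
def neighbors_undirected(v, DST, STR, NEI, DSTr, STRr, NEIr):
    """Neighbors of vertex v in an undirected graph stored as DI plus reversed DI arrays.

    Both lookups are O(1) array-section slices; STR[v] == -1 (or STRr[v] == -1)
    marks a vertex with no edges in that direction.
    """
    out = []
    if STR[v] != -1:
        out.extend(DST[STR[v]:STR[v] + NEI[v]])
    if STRr[v] != -1:
        out.extend(DSTr[STRr[v]:STRr[v] + NEIr[v]])
    return out
```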

4.3. Edge Oriented Sparse Graph Partition

For real-world power law [37,38,39] graphs, the edge and vertex distributions are highly skewed. Few vertices will have very large degrees but many vertices have very small degrees. If we partition the graph evenly based on the vertices, it will be easy to cause a load balancing problem because the processor that holds the vertices that have a large number of edges will often have a very heavy load. So, we equally divide the total number of edges into different computing nodes instead.
Figure 3 shows the basic idea of our sparse graph partition method. The edge arrays SRC and DST are distributed by BLOCK onto different processors to make sure most of the processors have the same load. Assigning an edge’s vertex entries in the index arrays NEI and STR to the same processor would increase locality when we search from edge to vertex or from vertex to edge. However, this would require us to distribute NEI and STR in an irregular way, since the number of elements assigned to different processors may be very different. In our current implementation, we simply partition the NEI and STR arrays evenly, in the same way as the edge arrays.
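The BLOCK distribution of the edge arrays can be pictured with the following small Python sketch, which computes the contiguous edge-ID range assigned to each locale; the function name and the equal-split policy shown here are illustrative assumptions, not the Chapel BLOCK implementation itself.

```python
def block_edge_ranges(num_edges, num_locales):
    """Split edge IDs 0..num_edges-1 into contiguous, nearly equal ranges,
    one per locale, mirroring an edge-oriented BLOCK partition."""
    base, extra = divmod(num_edges, num_locales)
    ranges, start = [], 0
    for loc in range(num_locales):
        size = base + (1 if loc < extra else 0)
        ranges.append((start, start + size - 1))   # inclusive [start, end]
        start += size
    return ranges

print(block_edge_ranges(10, 4))   # -> [(0, 2), (3, 5), (6, 7), (8, 9)]
```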

4.4. Comparison with CSR

The compressed sparse row (CSR) format, also known as compressed row storage (CRS) or the Yale format, has been widely used to represent a sparse matrix in much less space. Our double-index data structure has some similarity with CSR: the value array of CSR is just like the edge weight array in DI; the column array of CSR is the same as the DST array in DI; and the row array of CSR is very close to the STR array in DI. CSR is a vertex oriented data structure that supports quick search from any vertex to its adjacency list, and DI provides the same function.
The major difference between DI and CSR is that DI provides an explicit mapping from an edge ID to its incident vertices, but CSR does not. This difference has two effects: (1) DI can support another kind of quick search, from any edge ID to its incident vertices, whereas CSR cannot, because CSR does not provide the source vertex of a given edge ID; (2) DI can support an edge oriented graph partition (see Section 4.3) based on the explicit edge index arrays to achieve load balance, whereas CSR cannot.
Another difference is that in DI we use an array NEI to explicitly represent the number of neighbors of a given vertex, removing the ambiguous meaning of the row index array in CSR. NEI[v] could be replaced by STR[v + 1] − STR[v] if we extended the STR array by one element and used the multiple meanings of the row index array in CSR. In our DI data structure, the meaning of STR[v] is not ambiguous: it is the starting position of vertex v in the edge index array, or STR[v] = −1 if v has no edge. In CSR, however, if v has no edges, the value of STR[v] is not −1; it is the starting position of the next non zero element after v, or the total number of non zero elements. So, whenever we need to make sure STR[v] is really the starting position of v, we must execute an additional check because the value of STR[v] itself cannot provide such information. When we use the DI data structure to express a much smaller graph sketch, the total number of vertices N is much smaller than before. So, in DI we use an additional N-size array NEI (much smaller than the original graph) to ensure that the parallel operations on STR have clear semantics and are easy to understand.
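The difference can be seen on the toy graph used in the sketch after Section 4.1 (out-degrees [2, 0, 1, 1]); the CSR-style row pointer below is constructed here only for illustration.

```python
import numpy as np

# CSR-style row pointer with one extra trailing entry; vertex 1 has no edges.
ROW = np.array([0, 2, 2, 3, 4])
NEI = ROW[1:] - ROW[:-1]          # CSR: the degree must be derived
print(NEI)                         # -> [2 0 1 1]

# DI stores the degree explicitly and marks an isolated vertex with STR[v] == -1
# instead of reusing the next vertex's offset, so no extra check is needed.
STR = np.where(NEI > 0, ROW[:-1], -1)
print(STR)                         # -> [ 0 -1  2  3]
```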

5. Single Pass Regression Analysis Stream Model

Our stream model has the following features: (1) Only one pass is needed to build the graph sketch; (2) A parameter, the Shrinking Factor, is introduced to allow users to directly control the size of a sketch. So, the users’ requirement can be used to build a sketch with much less space; (3) The graph stream sketch is divided into three different partitions (we also refer to the three independent small graphs as sub-sketches) and this method can help us to avoid global heterogeneity and skewness but take advantage of the local similarity; (4) Our sketch is expressed as a graph so the exact graph analysis method on a complete graph can also be used on our different sub-sketches; (5) A general multivariate linear regression model is used to generate the approximate solution based on the results from different sub-sketches. The regression method can be employed to support general graph analysis methods.
The framework of our stream model includes three parts: (1) Mapping a graph stream into a much smaller graph sketch in one pass. The sketch can have multiple independent partitions or sub-sketches; (2) Developing an exact algorithm on different sub-sketches to generate partial solutions; (3) Developing a regression model to generate an approximate solution based on different partial solutions.

5.1. Building the Sketch as a Multi-Partition Graph

5.1.1. Sliding Window Stream

We first describe our method when the stream comes from a sliding window, which means that only the last M edges falling into the given time window will be considered.
We define the shrinking factor SF (a positive integer) to constrain the space allocation of a stream sketch. This means that the number of edges and vertices of the graph sketch will be 1/SF of the original graph. Existing research on sketch building [7] shows that multiple pairwise independent hash functions [4] can often generate better estimation results because they reduce the probability of hash collisions. In our solution, we sample different parts of the given sliding window instead of the complete stream and then map the sampled edges into multiple (here three) independent partitions to completely avoid collisions. We name the three partitions the Head, Mid, and Tail partitions, or sub-sketches, respectively. Each partition is assigned 1/(3 × SF) of the space of the original graph.
Since we divide the sketch into three independent partitions, we select a very simple hash function. For a given sliding window, let the total numbers of edges and vertices in the window be M and N. Then each partition will have PartitionM = M/(3 × SF) edges and PartitionN = N/(3 × SF) vertices. For any edge e = (i, j) from the original graph, we will map the edge ID e to mod(e, PartitionM). At the same time, its two vertices i and j will be mapped to mod(i, PartitionN) and mod(j, PartitionN). If e < M/3, we map e = (i, j) to the Head partition. If e ≥ 2M/3, we map e = (i, j) to the Tail partition. Otherwise, we map e = (i, j) to the Mid partition.
Figure 4 is an example that shows how we map a complete sliding window of 3,000,000 edges into a sketch with three partitions. The edges from 1,000,000 to 1,999,999 are mapped to the Head partition p0, which has 1000 edges. The edges from 2,000,000 to 2,999,999 are mapped to the Mid partition p1, and the edges from 3,000,000 to 3,999,999 are mapped to the Tail partition p2. Each partition is expressed as a graph.
After the three smaller partitions are built, we sort the edges and create the DI data structure to store the partition graphs.
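The mapping described above can be summarized by the following Python sketch; edge IDs are assumed to be 0-based within the sliding window, and the function name is illustrative.

```python
def map_to_sketch(e, i, j, M, N, SF):
    """Map edge e = (i, j) of a window with M edges and N vertices into one of
    the three sub-sketches (0 = Head, 1 = Mid, 2 = Tail).

    Returns (partition, local edge ID, local source ID, local destination ID)."""
    partition_m = M // (3 * SF)      # edges per sub-sketch
    partition_n = N // (3 * SF)      # vertices per sub-sketch
    if e < M // 3:
        part = 0                     # Head
    elif e >= 2 * M // 3:
        part = 2                     # Tail
    else:
        part = 1                     # Mid
    return part, e % partition_m, i % partition_n, j % partition_n
```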
We map different parts of a stream into corresponding sub-sketches and this is very different from the existing sketch building methods. They map the same stream into different sketches with independent pairwise hash functions so each sketch is an independent summarization of the stream. However, in our method, one sub-sketch can only stand for a part of the stream and we use different sub-sketches to capture the different local features in a stream. Our regression model (see Section 5.3) will be responsible for generating the final approximate solution.
More partitions will help to capture more local features of a stream. However, we aim for a regression model that is as simple as possible. Too many partitions will make our regression model have more variables and become complicated. We select three as the default number of partitions because this can achieve sufficient accuracy based on our experimental results. Of course, for some special cases, we may choose more partitions but use the same proposed framework.

5.1.2. Insert-Only and Insert-Delete Streams

For the sliding window stream, we only need to care about the edges in the given window. However, for insert-only and insert-delete streams, we need to consider the edges that will come in the future. Here, we will describe the sketch updating mechanism in our stream model.
For the insert-delete stream, we introduce a new weight array in the sketch. If the total number of edges that can be held in one partition is PartitionM, then we introduce a weight array E_WEI of size PartitionM to store the latest update information of each edge. The initial value of E_WEI is zero. If a newly inserted edge is mapped to e, then we let E_WEI[e] = E_WEI[e] + 1. If a deleted edge is mapped to e, then we let E_WEI[e] = E_WEI[e] − 1. So, for any edge in the partition, if E_WEI[e] = 0, the edge has been deleted and we can safely remove it from our sketch. If E_WEI[e] > 1, multiple edges in the stream have been mapped to this edge. For the insert-only stream, the handling is similar, but we never reduce the value of E_WEI.
To improve the sketch update performance, we will use the bulk updating mechanism for the two streams. If B new edges have arrived (they can be insert edges or delete edges), we will divide B into three parts and map them into the three different partitions just as in the method used in the sliding window stream. After the new edges are updated, we need to resort the edges and update the DI data structure to express the updated sub-sketches.
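The E_WEI bookkeeping for a bulk update of one sub-sketch can be sketched as follows (a simplified illustration assuming 0-based edge IDs; the real implementation also re-sorts the edges and rebuilds the DI data structure, as noted above):

```python
import numpy as np

def apply_updates(E_WEI, updates, partition_m):
    """Apply a batch of (edge_id, delta) stream updates to one sub-sketch,
    where delta is +1 for an inserted edge and -1 for a deleted edge.

    E_WEI[e] counts how many stream edges are currently mapped onto sketch
    edge e; a count of 0 means the sketch edge can be removed."""
    for edge_id, delta in updates:
        e = edge_id % partition_m
        E_WEI[e] += delta
    return np.flatnonzero(E_WEI == 0)   # sketch edges that can now be dropped
```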

5.2. Edge–Vertex Iterator Based Triangle Counting Algorithm

We can directly employ existing exact graph algorithms in our sketch because the sketch is also expressed as a graph. Here, we use a typical graph analysis algorithm—triangle counting—to show how we can develop optimized exact algorithms based on our DI data structure.
To improve the performance of a distributed triangle counting algorithm, two important problems are maintaining load balancing and avoiding remote access or communication as much as possible.
In Chapel, the locale type refers to a unit of the machine resources on which the program is running. Locales have the capability to store variables and to run Chapel tasks. In practice, for a standard distributed memory architecture, a single multicore/SMP node is typically considered a locale. Shared memory architectures are typically considered a single locale. In this work, we will develop a multiple-locale exact triangle counting algorithm for distributed memory clusters.
For power law graphs, a small number of vertices may have a high degree. So, if we divide the data based on number of vertices, it is easy to cause an unbalanced load. Our method divides the data based on the number of edges. At the same time, our DI data structure will keep the edges connected with the same vertex together. So, the edge partition method will result in good load balancing and data access locality.
However, if each locale directly employs the existing edge iterator [13] on its edges, the reverse edges of the undirected graphs are often not in the same locale. This will cause a new load balancing problem. So, we will first generate the vertices based on the edges distributed to different locales. Then, each locale will employ the vertex iterator to calculate the number of triangles. The combined edge–vertex iterator method is the major innovation for our triangle counting method on distributed systems.
When we employ the high level parallel language Chapel to design the parallel exact triangle counting algorithm, there are two additional advantages: (1) Our DI data structure can work together with the coforall or forall parallel construct of Chapel to exploit the parallelism; (2) We can take advantage of the high level module Set provided by Chapel to implement the parallel set operation easily and efficiently.
At a high level, our proposed algorithm takes advantage of the multi-locale feature of Chapel to handle very large graphs in distributed memory. The distributed data are processed at their locales or their local memory. Furthermore, each shared memory compute node can also process their own data in parallel. The following steps are needed to implement the multi-locale exact triangle counting algorithm: (1) The DI graph data should be distributed evenly onto the distributed memory to balance the load; (2) Each distributed node only counts the triangle including the vertices assigned to the current node; (3) All the triangles calculated by different nodes should be summed together to obtain the exact number of triangles.
Our multi-locale exact triangle counting algorithm is described in Algorithm 1. For a given graph sketch partition G_sk = (E_sk, V_sk), we use an array subTriSum to keep each locale’s result (line 2). In line 3, we use coforall instead of forall to allow each loc in Locales to execute the following code in parallel, so we can fully exploit the distributed computing resources. The code between line 3 and line 17 is executed in parallel on each locale. Each locale uses a local variable triCount to store the number of triangles (line 5). Line 6 and line 7 are important for implementing load balancing. Assuming edges from e_start to e_end are assigned to the current locale, we can obtain the corresponding vertex IDs StartVer = SRC[e_start] and EndVer = SRC[e_end] as the vertex interval that the current locale will handle. Since different locales may have different edges with the same starting vertex, the intervals [StartVer..EndVer] of different locales may overlap. At the same time, some starting vertices in the SRCr index array may not appear in SRC, so we should also make sure there is no “hole” in the complete interval [0..|V_sk| − 1]. In line 7, we make sure all the intervals cover all the vertices without overlapping.
Our method includes the following steps: (1) If the current locale’s StartVer is the same as the previous locale’s EndVer, one vertex’s edges have been partitioned onto two different locales; we set StartVer = StartVer + 1 to avoid two locales counting the same vertex. (2) If the current locale’s StartVer differs from the previous locale’s EndVer by more than 1, there is a “hole” between the previous locale’s vertices and the current locale’s vertices; we let the current locale’s StartVer = the previous locale’s EndVer + 1. (3) If the current locale is the last locale, we let its EndVer = the last vertex ID; if the current locale is the first locale, we let StartVer = 0.
From line 8 to line 14, we count, in parallel, all the triangles starting from the vertices assigned to the current locale. In line 9, we generate all the adjacent vertices u_adj of the current vertex u whose vertex IDs are larger than u. From line 10 to line 12, for any vertex v in u_adj, we generate all the adjacent vertices v_adj of the current vertex v whose vertex IDs are larger than v. The number of vertices in u_adj ∩ v_adj is then the number of triangles containing edge (u, v). Since we only count the triangles whose vertices satisfy u < v < w, we do not count duplicate triangles and thus avoid unnecessary calculation. In line 15, each locale assigns its total number of triangles to the corresponding position of array subTriSum. Finally, in line 18, we sum the numbers of triangles from the different locales to obtain the total number of triangles.
Algorithm 1: Edge–vertex Iterator triangle counting algorithm
(The Chapel listing of Algorithm 1 appears as an image in the original article.)
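Since the original listing is written in Chapel, the following Python sketch is offered only as a readable mirror of the per-locale logic described above (sets of higher-numbered neighbors and the size of u_adj ∩ v_adj); it runs on a single shared-memory "locale", and the helper names are assumptions, not the authors' code.

```python
def count_triangles(DST, STR, NEI, DSTr, STRr, NEIr, start_ver, end_ver):
    """Count triangles u < v < w for vertices u in [start_ver, end_ver],
    using the DI arrays of one sub-sketch of an undirected graph."""
    def adj_gt(u):
        # Neighbors of u with a larger vertex ID, gathered from both directions.
        nbrs = set()
        if STR[u] != -1:
            nbrs.update(DST[STR[u]:STR[u] + NEI[u]])
        if STRr[u] != -1:
            nbrs.update(DSTr[STRr[u]:STRr[u] + NEIr[u]])
        return {w for w in nbrs if w > u}

    tri = 0
    for u in range(start_ver, end_ver + 1):
        u_adj = adj_gt(u)
        for v in u_adj:
            # Common higher-numbered neighbors of u and v close a triangle u < v < w.
            tri += len(u_adj & adj_gt(v))
    return tri
```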

5.3. Real-World Graph Distribution Based Regression Model

Instead of developing different specific methods for different graph analysis problems, we propose a general regression method to generate the approximate solution based on the results of different sub-sketches.
One essential objective of a stream model is generating an accurate estimation based on the query results from its sketch. The basic idea of our method is to exploit the real-world graph distribution information to build an accurate regression model. Then we can use the regression model to generate approximate solutions for different queries.
Specifically, for the triangle counting problem, when we know the exact number of triangles in each sub-sketch (see Table 1), the key problem is how we can estimate the total number of triangles.
To achieve user-required accuracy for different real-world applications, our method exploits the features of real-world graph distributions. Many sparse networks—social networks in particular—have an important property; their degree distribution follows the power law distribution. Normal distributions are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. So, we develop two regression models for the two different graph degree distributions.
We let E_H, E_M, and E_T be the exact numbers of triangles in the Head, Mid and Tail sub-sketches. The exact triangle counting algorithm is given in Section 5.2.
For normal degree distribution graphs, we assume that the total number of triangles has a linear relationship with the number of triangles in each sub-sketch. We take E_H, E_M, and E_T as three randomly sampled results of the original stream graph to build our multivariate linear regression model. TC_Normal is the estimated number of triangles, and the regression model is given in Equation (1):
TC_Normal = a × E_H + b × E_M + c × E_T.   (1)
For power law graphs, the sampling results E_H, E_M, and E_T can be significantly different because of the skewness of the degree distribution. A power law distribution has the property that its log–log plot is a straight line, so we assume that the logarithm of the total number of triangles in a stream graph has a linear relationship with the logarithms of the exact numbers of triangles in the different sub-sketches. We then have two ways to build the regression model: one unordered and the other ordered. The unordered model is expressed in Equation (2); it does not capture the relative order of the sampling results:
TC_powerlaw,log = a × E_H,log + b × E_M,log + c × E_T,log,   (2)
where TC_powerlaw,log = log(TC_powerlaw) is the logarithm of the estimated value TC_powerlaw, and E_H,log = log(E_H), E_M,log = log(E_M), and E_T,log = log(E_T) are the logarithms of E_H, E_M, and E_T, respectively.
We then refine the regression model for the power law distribution as follows. We obtain E_min, E_median and E_max by sorting E_H, E_M, and E_T; they are the minimum, median and maximum sampled numbers of triangles, standing for the sampling results of a power law distribution at the right part (low probability), middle part (moderately high probability) and left part (very high probability). We let E_min,log = log(E_min), E_median,log = log(E_median), and E_max,log = log(E_max). Our ordered multivariate linear regression model for power law graphs is given in Equation (3):
TC_powerlaw,log = a × E_min,log + b × E_median,log + c × E_max,log.   (3)
The accuracy of our multivariate linear regression models is given in Section 7.3.1, where we can see that the ordered regression model is better than the unordered one. Both our normal and our ordered power law regression models achieve very high accuracy.
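For completeness, the coefficients a, b, c of Equations (1)–(3) can be fitted with ordinary least squares; the following Python sketch assumes training arrays of sub-sketch triangle counts and exact totals collected offline, and is not the authors' own fitting code.

```python
import numpy as np

def fit_normal_model(E_H, E_M, E_T, true_counts):
    """Fit a, b, c of Equation (1) by least squares; inputs are 1-D arrays of
    sub-sketch triangle counts and exact totals from training graphs."""
    X = np.column_stack([E_H, E_M, E_T])
    coef, *_ = np.linalg.lstsq(X, true_counts, rcond=None)
    return coef                                   # [a, b, c]

def fit_powerlaw_model(E_H, E_M, E_T, true_counts):
    """Fit the ordered log-space model of Equation (3); counts must be positive."""
    samples = np.sort(np.column_stack([E_H, E_M, E_T]), axis=1)  # min, median, max
    X = np.log(samples)
    y = np.log(true_counts)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef            # estimate for a new stream: exp(x_new @ coef)
```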

6. Integration with Arkouda

Based on the current Arkouda framework, the communication mechanism can be directly reused between Python and Chapel. To support large graph analysis in Arkouda, the major work is implementing the graph data structure and the corresponding analysis functions in both Python and the Chapel package. In this section, we will introduce how we define the graph classes to implement our DI data structure and develop a corresponding Python benchmark to drive the triangle counting method.

6.1. Graph Classes Definition in Python and Chapel

As described in Section 4.1, our DI data structure can also support graph weights through the V_WEI (vertex weight) and E_WEI (edge weight) arrays. Based on our DI data structure, four classes—directed graph (GraphD), directed and weighted graph (GraphDW), undirected graph (GraphUD) and undirected and weighted graph (GraphUDW)—are defined in Python. Four corresponding classes—SegGraphD, SegGraphDW, SegGraphUD, SegGraphUDW—are defined in Chapel to represent the different kinds of graphs. In our class definition, the directed graph is the base class; the undirected graph and the directed and weighted graph are its two child classes; finally, the undirected and weighted graph inherits from the undirected graph.
All the graph classes, including their major attributes, are given in Table 2. The left columns are the Python classes and their descriptions. The right columns are the corresponding Chapel classes and their descriptions. Based on the graph class definition, we can develop different graph algorithms.
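Since Table 2 is not reproduced here, the following Python sketch only illustrates the class hierarchy described above; the attribute names are placeholders, not the exact definitions from Table 2.

```python
class GraphD:
    """Directed graph described by the DI arrays (illustrative attributes)."""
    def __init__(self, n_vertices, n_edges, src, dst, start_i, neighbor):
        self.n_vertices, self.n_edges = n_vertices, n_edges
        self.src, self.dst = src, dst                      # edge index arrays
        self.start_i, self.neighbor = start_i, neighbor    # vertex index arrays

class GraphDW(GraphD):
    """Directed, weighted graph: adds vertex and edge weight arrays."""
    def __init__(self, *args, v_weight=None, e_weight=None):
        super().__init__(*args)
        self.v_weight, self.e_weight = v_weight, e_weight

class GraphUD(GraphD):
    """Undirected graph: adds the reversed DI arrays."""
    def __init__(self, *args, srcR=None, dstR=None, start_iR=None, neighborR=None):
        super().__init__(*args)
        self.srcR, self.dstR = srcR, dstR
        self.start_iR, self.neighborR = start_iR, neighborR

class GraphUDW(GraphUD):
    """Undirected, weighted graph."""
    def __init__(self, *args, v_weight=None, e_weight=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.v_weight, self.e_weight = v_weight, e_weight
```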

6.2. Triangle Counting Benchmark

To compare the performance of the exact method and our approximate method, we implement two triangle counting related functions. For the exact method, we implement the “graph_file_read” and “graph_triangle” Python functions to read a complete graph file and call our triangle counting kernel algorithm (Algorithm 1) to calculate the number of triangles.
For our approximate method, we implement the “stream_file_read” function to simulate the stream by reading a graph file’s edges one by one. The graph sketch introduced in Section 5.1 is built automatically as we read the edges. Currently, we only implement sketch building for the sliding window stream. Automatic sketch building for insert-only and insert-delete streams will be future work, after we evaluate our data structure and sketch with a larger collection of graph algorithms.
After the sketch of a sliding window stream is built in memory, we call the “stream_tri_cnt” function to first calculate the exact number of triangles in each sub-sketch (it also calls our kernel triangle counting Algorithm 1) and then use the regression model to estimate the total number of triangles in the given stream.
For the power law distribution regression model, in the benchmark we also implement single variable regression models using the maximum, minimum and median results of the three sub-sketches respectively. Our testing results show that the proposed regression method in Section 5.3 can achieve a better accuracy.
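A possible end-to-end use of these benchmark functions from the Python front-end is sketched below; only the function names come from the text, so the argument lists, keyword names, and return values shown here are assumptions.

```python
import arkouda as ak

ak.connect(server="localhost", port=5555)     # host/port are illustrative

# Exact baseline: load the whole graph and run the exact kernel (Algorithm 1).
graph = ak.graph_file_read("delaunay_n20.gr")          # assumed signature
exact = ak.graph_triangle(graph)                       # assumed signature

# Streaming path: build the three-partition sketch while reading edges,
# then estimate the triangle count with the regression model.
sketch = ak.stream_file_read("delaunay_n20.gr", shrink_factor=32)   # assumed
approx = ak.stream_tri_cnt(sketch)                                   # assumed

print(exact, approx)
ak.disconnect()
```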

7. Experiments

7.1. Experimental Setup

Testing was conducted on a 32 node cluster with FDR Infiniband between the nodes; Infiniband connections between nodes are commonly found in high performance computers. Each node has 512 GB of RAM and two 10-core Intel Xeon E5-2650 v3 processors @ 2.30 GHz, for 20 cores in total. Because Arkouda is designed primarily for data analysis in an HPC setting, an architecture that fits an HPC scenario was chosen for testing.

7.2. Datasets Description and Analysis

We will use graph files and read the edges line by line to simulate the sliding window of a graph stream. The graphs selected for testing were chosen from various publicly available datasets for the benchmarking of graph methods. These include datasets from the 10th DIMACS Implementation Challenge [40], as well as some real-life graph datasets from the Stanford Network Analysis Project (SNAP) http://snap.stanford.edu/data/ (accessed on 20 July 2021). Pertinent information about these datasets for the triangle counting method are displayed in Table 3.
Based on the graphs’ degree distribution, we divided them into two classes, normal distribution and power law distribution. Prior research has shown that some real-world graphs follow power law distributions. Normal distributions are also found ubiquitously in the natural and social sciences to represent real-valued random variables whose distributions are not known. For normal distributions, their typical parameters, the mean and standard deviation fitted from the given graphs, are listed in Table 3 too. Figure 5 shows the comparison between the fitting results and the original graph data. We can see that the fitting results can match with the original data very well. In other words, the Delaunay graphs follow the normal distribution.
For graphs that follow the power law distribution y(x) = a × x^k, we also present their fitting parameters a and k in Table 3. Figure 6 provides some examples of these graphs. We can see that, in general, their signatures are close to power laws. However, we can also see that some data do not fit a power law distribution very well.

7.3. Experimental Results

7.3.1. Accuracy

In order to evaluate the accuracy of our method, we conducted experiments on nine normal distribution and 12 power law distribution benchmark graphs with shrink factors 4, 8, 16, 32, and 64. In total, we have 105 test cases. For each test, we will have three triangle numbers for the Head, Mid and Tail sketch partitions. We will use the exact number of triangles in the three sub-sketches to provide the approximate number of triangles in the given graph stream. The testing results are given in Table 1.
For the normal distribution, based on our regression model expressed as Equation (1), we can get the parameters and express the multivariate linear regression equation as:
TC_Normal = 0.7264 × E_H + 1.4157 × E_M + 1.7529 × E_T.
When we compute the absolute values of all the percent errors, the mean error is 4%, the max error is 25% and the R-squared value is 1, which means that the final number of triangles is almost completely determined by the sketch results.
We also evaluated how the accuracy of our regression model changes with the shrinking factor. For the normal distribution, we built five regression models with shrinking factors of 4, 8, 16, 32, and 64, respectively. The result is shown in Figure 7. Both the mean and max error increase as we double the shrinking factor step by step, but the mean error increases very slowly, from 1% to 5.97%, while the max error increases from 3.36% to 26.87%. This result shows that our regression model for normal distribution graphs is stable and very accurate. When the graph sketch shrinks to half its previous size, there is very little effect on the accuracy as long as the number of triangles is not less than some threshold value. This is important because it means we can use much smaller sketches to achieve very accurate results.
For the power law distribution, based on our ordered regression model in Equation (3), which is refined from the unordered regression model, the fitted log-space multivariate linear regression equation can be expressed as:
TC_powerlaw,log = 0.4464 × E_min,log + 0.1181 × E_median,log + 1.4236 × E_max,log.
The mean error is 3.2%. The max value is 7.2%. The R-squared value is 0.91. This means that the model can describe the data very well. If we use the unordered regression model, Equation (2), the mean error is 4.5%. The max value is 12.5%. R-squared is 0.81. All the results are worse than the results of the ordered regression model in Equation (3).
We also performed experiments to show the accuracy of ordered and unordered regression models when we built five different regression models using different shrinking factors of 4, 8, 16, 32, and 64. The experimental results are presented in Table 4 and we can make two conclusions from the results: (1) All the results of our ordered regression model are better than the results of the unordered regression model. This means that when we exploit more degree distribution information and integrate it into our model, we can achieve a better accuracy; (2) When we double the value of the shrinking factor, the change in accuracy is much smaller than the change in the normal regression model. This also means that, in the power law model, the accuracy is much less sensitive to the shrinking factor because the power law graph is highly skewed. This conclusion also means that we can use a relatively smaller shrinking factor to achieve almost the same accuracy.
The accuracy evaluation results show that the proposed regression model for normal distribution graphs and power law distribution graphs can achieve very accurate results. For both, the mean relative error is less than 4%. However, if we employ the normal regression model in Equation (1) to the power-law distribution data, the mean relative error can be as high as 26%, and the max relative error is 188%. This means that employing different regression models for different degree distribution graphs may significantly improve the accuracy.

7.3.2. Performance

Existing exact triangle counting algorithms show that, for very large graphs, the communication between different computing nodes will consume the majority of the time. Therefore, reducing the total communication, or improving data access locality during triangle counting, are the key factors for improving the final performance.
Since our triangle counting method maintains load balancing and locality, it can achieve high performance for large graph streams. Our experimental results, provided in Table 5, show that our streaming triangle algorithm displays high locality. We count all the local data accesses and all the remote data accesses during the triangle counting procedure and define the local access ratio LAR = N_L / (N_L + N_R), where N_L is the total number of local accesses and N_R is the total number of remote accesses. For the three sub-sketches—Head, Mid and Tail—we calculate their LAR values, respectively. From the table we can see that, for all cases, the LAR value is larger than 50% and the average LAR value is 74%. This means that, during our triangle counting procedure, most of the data come from local memory. This is the major reason our method can achieve high performance.
To show the speedup of the streaming process, we compare the execution time of triangle counting on the original complete graph with that of our sketch method. The experimental results are given in Table 6. From the table we can see that our streaming method uses much less space and runs much faster to estimate the number of triangles in a large graph stream with only one pass. Since the experiments take a very long time for large graphs, here we just present three different shrinking factors to show the trend. For very large graphs, we can use a very large shrinking factor to significantly reduce the processing space, and the absolute speedup is also very high; however, the relative speedup compared with the shrinking factor is a little lower. For relatively small graphs, the shrinking factor is also relatively small, so the absolute speedup is small but the relative speedup is high. Our experimental results show that the speedup increases almost linearly with the shrinking factor.
We also use an even larger benchmark graph, com-Friendster, a network dataset with 1,806,067,135 edges and 65,608,366 vertices https://snap.stanford.edu/data/ (accessed on 20 July 2021), and run it with two locales. When the shrinking factor is 32,768, it takes 44,719.4 s to build the sketch, with a maximum memory footprint of 235,332 KB, and the triangle counting time is 22,368.2 s. When the shrinking factor is 65,536, it takes 44,593.0 s to build the sketch, with a maximum memory footprint of 235,288 KB, and the triangle counting time is 22,296.0 s. If we use the exact method, also with two locales, the program fails because it runs out of memory. This example shows that our method can handle very large graphs within a limited memory footprint.

8. Conclusions

Interactive graph stream analysis is a challenging problem. In this paper, we present our solution to show how we can solve the problem in the open source framework Arkouda. The advantage of Arkouda lies in two aspects: high productivity and high performance. High productivity means that the end users can use a popular data science language such as Python to explore different graph streams. High performance means that the end users can break the limit of their laptop and personal computer’s capabilities in memory and calculation to handle very large graphs in an interactive way.
We design a double index data structure to support quick search from a given vertex to edges or from a given edge to vertices. This greatly reduces the total memory footprint for large sparse graph streams. Based on the proposed double index data structure, we develop an efficient multiple-partition stream sketch building method. Our sketch can support general stream query problems because our sketch uses a small graph to summarize a large graph. We define a shrinking factor to let users control the size of final sketch accurately. We exploit the ubiquitous normal and power law degree distributions of given graph streams to propose two regression models to estimate the results of the graph streams from their sketch partitions. Based on the double index data structure, we develop a multi-locale distributed triangle counting algorithm that can maintain load balancing and a high local data access ratio to improve the graph stream processing performance. All our methods have been integrated into the Arkouda framework and users can use a Jupyter Notebook to drive the interactive graph stream analytics pipeline easily.
Experimental results on two kinds of large sparse graph streams—normal distribution and power law distribution—show that the proposed method can achieve highly accurate approximate results, with a mean error no larger than 4%. The average local access ratio is 74%, and the graph stream processing speedup increases almost linearly with the shrinking factor. This work shows that Arkouda is a promising framework for supporting large scale interactive graph stream analytics. Of course, the reported work is just the first step, supporting one typical graph query, triangle counting. In future work, we will provide more graph stream algorithms based on our sketch methodology and will further optimize the performance of our algorithms in Arkouda.

Author Contributions

Conceptualization and methodology, Z.D. and D.A.B.; software implementation, Z.D., O.A.R. and D.A.B.; validation, J.P.; writing—original draft preparation, Z.D., O.A.R. and D.A.B.; writing—review and editing, Z.D., O.A.R. and D.A.B.; funding acquisition, D.A.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by NSF grant number CCF-2109988 (Bader).

Data Availability Statement

The data utilized in this project came from the following locations. The Delaunay datasets were acquired from the SuiteSparse Matrix Collection. The remaining real-world datasets were all acquired from Stanford’s Large Network Dataset Collection.

Acknowledgments

We appreciate Michael Merrill, William Reus and the Arkouda team’s help when we integrated this work into Arkouda.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Overview of Arkouda's components and working mechanism.
Figure 2. Double Index data structure for graph stream sketch.
Figure 3. Edge based Sparse Graph Partition.
Figure 4. Mapping a graph stream into a multi-partition sketch.
Figure 5. Normal distribution graphs.
Figure 6. Power law distribution graphs.
Figure 7. How accuracy changes with shrinking factor for normal and power law distribution graphs.
Table 1. Sketch Triangle Number.
Graph Name | Shrinking Factor | Head | Mid | Tail | Exact
delaunay_n104981381502047
delaunay_n1086372802046
delaunay_n10163541412044
delaunay_n10322012162043
delaunay_n106433102042
delaunay_n1141953022944104
delaunay_n118951511444103
delaunay_n11166271634101
delaunay_n11322632294100
delaunay_n11641224144009
delaunay_n1242586075758215
delaunay_n1281633062938214
delaunay_n1216901391478212
delaunay_n12325278768211
delaunay_n12645142328210
delaunay_n1346461160119716,442
delaunay_n13833559858816,441
delaunay_n131618729730416,439
delaunay_n133210814415316,438
delaunay_n136468628116,437
delaunay_n14410992356229932,921
delaunay_n1485071205118432,920
delaunay_n141626956857932,918
delaunay_n143213529430332,917
delaunay_n14649314413732,916
delaunay_n15420854571458565,872
delaunay_n1589432335232065,871
delaunay_n15164861140115965,869
delaunay_n153226556858065,868
delaunay_n156413728229965,867
delaunay_n164416896489510131,842
delaunay_n168217046664601131,841
delaunay_n1616106323872348131,839
delaunay_n163256911741169131,838
delaunay_n1664326576580131,837
delaunay_n174815418,87318,938263,620
delaunay_n178401295439501263,619
delaunay_n1716213245954773263,617
delaunay_n1732104222882347263,616
delaunay_n176453411561180263,615
delaunay_n18416,90437,77438,061527,234
delaunay_n188815519,00519,107527,233
delaunay_n1816417694549562527,231
delaunay_n1832209746744686527,230
delaunay_n1864100423692348527,229
amazon44944638318,381667,259
amazon8142717527351667,258
amazon166216173614667,256
amazon323312902250667,255
amazon641771531546667,254
dblp420,26941,390168,9962,225,882
dblp8687617,04367,7822,225,881
dblp162684484620,3102,225,879
dblp321097272611,4562,225,878
dblp64545232526782,225,877
ca-HepTh.txt412671072385328,677
ca-HepTh.txt8642591174128,676
ca-HepTh.txt1645639048028,674
ca-HepTh.txt3239126721928,673
ca-HepTh.txt6426021715728,672
ca-CondMat.txt4796010,28012,051174,578
ca-CondMat.txt8426449705543174,577
ca-CondMat.txt16286228472655174,575
ca-CondMat.txt32188816931403174,574
ca-CondMat.txt6411791292845174,573
ca-AstroPh.txt475,59478,23478,8551,374,119
ca-AstroPh.txt841,33147,38740,3951,374,118
ca-AstroPh.txt1624,82828,24420,4681,374,116
ca-AstroPh.txt3214,98416,69211,1281,374,115
ca-AstroPh.txt646450650763221,374,114
email-Enron.txt470,07728,78325,933727,044
email-Enron.txt822,33113,59315,197727,043
email-Enron.txt1611,64083815002727,041
email-Enron.txt32668152122392727,040
email-Enron.txt64356129121056727,039
ca-GrQc.txt411933374530048,265
ca-GrQc.txt86712131109648,264
ca-GrQc.txt1634688073348,262
ca-GrQc.txt3227669438548,261
ca-GrQc.txt646422115048,260
ca-HepPh.txt4118,54074,598187,0033,345,241
ca-HepPh.txt844,93733,40464,2813,345,240
ca-HepPh.txt1616,77516,42635,8173,345,238
ca-HepPh.txt323708664387243,345,237
ca-HepPh.txt64705370728993,345,236
loc-brightKite_edges.txt412,23244352489301,812
loc-brightKite_edges.txt8526632171652301,811
loc-brightKite_edges.txt1625791673927301,809
loc-brightKite_edges.txt3210611335380301,808
loc-brightKite_edges.txt64552856280301,807
Table 2. Double-Index Sparse Graph Class Definition in Python and Chapel.
Python Class | Python Attribute and Description | Chapel Class | Chapel Attribute and Description
GraphD | n_vertices: number of vertices | SegGraphD | n_vertices: number of vertices
 | n_edges: number of edges | | n_edges: number of edges
 | directed: directed graph or not | | directed: directed graph or not
 | weighted: weighted graph or not | | weighted: weighted graph or not
 | src: the source of every edge in the graph | | srcName/src: the name and data instance of the source array
 | dst: the destination of every edge in the graph | | dstName/dst: the name and data instance of the destination array
 | start_i: starting index of all the vertices in src | | startName/start_i: the name and data instance of the starting index array
 | neighbor: number of neighbors of all the vertices | | neighborName/neighbor: the name and data instance of the neighbor array
GraphDW(GraphD) | v_weight: weight of vertex | SegGraphDW: SegGraphD | v_weightName/v_weight: the name and data instance of the vertex weight array
 | e_weight: weight of edge | | e_weightName/e_weight: the name and data instance of the edge weight array
GraphUD(GraphD) | srcR: the source of every reverse edge in the graph | SegGraphUD: SegGraphD | srcNameR/srcR: the name and data instance of the reverse source array
 | dstR: the destination of every reverse edge in the graph | | dstNameR/dstR: the name and data instance of the reverse destination array
 | start_iR: starting index of all the vertices in the reverse src | | startNameR/start_iR: the name and data instance of the reverse starting index array
 | neighborR: number of neighbors of all the vertices in srcR | | neighborNameR: the name and data instance of the reverse neighbor array
GraphUDW(GraphUD) | v_weight: weight of vertex | SegGraphUDW: SegGraphUD | v_weightName/v_weight: the name and data instance of the vertex weight array
 | e_weight: weight of edge | | e_weightName: the name and data instance of the edge weight array
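To complement Table 2, the short sketch below shows how the reverse arrays of the undirected classes (srcR, dstR, start_iR, neighborR) could be derived from the forward edge arrays by swapping endpoints and re-indexing. This is a simplified NumPy illustration under our own conventions, not the Chapel code used in Arkouda.

```python
# Reverse index arrays (srcR, dstR, start_iR, neighborR) from Table 2, built by
# swapping edge endpoints and re-indexing. Simplified NumPy illustration only.
import numpy as np

def build_index(src, dst, n_vertices):
    """Sort edges by src and build the per-vertex start/neighbor index arrays."""
    order = np.argsort(src, kind="stable")
    src, dst = src[order], dst[order]
    neighbor = np.bincount(src, minlength=n_vertices)
    start_i = np.concatenate(([0], np.cumsum(neighbor)[:-1]))
    start_i[neighbor == 0] = -1        # this sketch's convention for isolated vertices
    return src, dst, start_i, neighbor

src = np.array([0, 0, 1, 1, 2])        # toy forward edge arrays
dst = np.array([1, 2, 2, 3, 3])

# Forward index and reverse index (every edge (u, v) also stored as (v, u)):
fwd = build_index(src, dst, n_vertices=4)
srcR, dstR, start_iR, neighborR = build_index(dst, src, n_vertices=4)
print(srcR, dstR, start_iR, neighborR)
```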
Table 3. Graph benchmarks utilized for the triangle counting algorithm.
Normal degree distribution:
Graph Name | Edges | Vertices | Mean | Standard Deviation
delaunay_n10 | 3056 | 1024 | 5.96875 | 3.10021
delaunay_n11 | 6127 | 2048 | 5.98340 | 3.76799
delaunay_n12 | 12,264 | 4096 | 5.98828 | 3.49865
delaunay_n13 | 24,547 | 8192 | 5.99292 | 3.25055
delaunay_n14 | 49,122 | 16,384 | 5.99634 | 3.30372
delaunay_n15 | 98,274 | 32,768 | 5.99817 | 3.37514
delaunay_n16 | 196,575 | 65,536 | 5.99899 | 3.35309
delaunay_n17 | 393,176 | 131,072 | 5.99938 | 3.26819
delaunay_n18 | 786,396 | 262,144 | 5.99972 | 3.2349
Power law degree distribution:
Graph Name | Edges | Vertices | a | k
amazon | 925,872 | 334,863 | 1,920,877 | 2.81
dblp | 1,049,866 | 317,080 | 3,299,616.65 | 2.71
youtube | 2,987,624 | 1,134,890 | 701,179.83 | 1.58
lj | 34,681,189 | 3,997,962 | 50,942,065.49 | 2.40
orkut | 117,185,083 | 3,072,441 | 40,351,890.91 | 2.12
ca-HepTh | 51,971 | 9877 | 23,538.9 | 2.29
ca-CondMat | 186,936 | 23,133 | 68,923.6 | 2.23
ca-AstroPh | 396,160 | 18,772 | 46,698.3 | 1.84
email-Enron | 367,662 | 36,692 | 10,150.2 | 1.54
ca-GrQc | 28,980 | 5242 | 6610.35 | 2.04
ca-HepPh | 237,010 | 12,008 | 4601.72 | 1.44
loc-brightkite_edges | 214,078 | 58,228 | 44,973.1 | 1.88
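Table 3 characterizes the power law graphs by two parameters, a and k. Assuming these parameterize a degree-histogram fit of the form count(d) ≈ a·d^(-k) (our assumption; the authors' exact definition and fitting procedure may differ), such parameters can be obtained with a simple least-squares fit in log-log space, as in the sketch below.

```python
# Fit count(d) ~ a * d**(-k) to a degree histogram by linear regression in
# log-log space. Illustrative only; the paper's exact fitting procedure may differ.
import numpy as np

def fit_power_law(degrees):
    d = np.asarray(degrees)
    values, counts = np.unique(d[d > 0], return_counts=True)
    # log(count) = log(a) - k * log(d): fit a line to (log d, log count).
    slope, intercept = np.polyfit(np.log(values), np.log(counts), 1)
    return np.exp(intercept), -slope          # (a, k)

# Toy example: degrees drawn from a heavy-tailed (Zipf) distribution.
rng = np.random.default_rng(0)
sample_degrees = rng.zipf(a=2.5, size=100_000)
a, k = fit_power_law(sample_degrees)
print(f"a = {a:.1f}, k = {k:.2f}")
```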
Table 4. Comparison between Ordered and Unordered Regression Models for Power Law Distribution.
Shrink Factor | AbsMaxError (Ordered) | AbsMaxError (Unordered) | AbsMeanError (Ordered) | AbsMeanError (Unordered) | R-Squared (Ordered) | R-Squared (Unordered)
4 | 6.76% | 8.26% | 2.84% | 3.75% | 91.42% | 85.10%
8 | 6.08% | 8.89% | 3.05% | 4.13% | 91.59% | 82.90%
16 | 4.16% | 8.66% | 1.97% | 3.64% | 96.47% | 87.85%
32 | 6.67% | 10.80% | 3.02% | 4.02% | 90.85% | 82.73%
64 | 4.66% | 6.68% | 2.86% | 3.02% | 94.12% | 92.04%
Table 5. Local Access Ratio of different stream graphs.
Filename | Head Local Access Ratio | Mid Local Access Ratio | Tail Local Access Ratio
delaunay_n10 | 0.736364 | 0.75 | 0.642045
delaunay_n11 | 0.578635 | 0.789894 | 0.707775
delaunay_n12 | 0.737226 | 0.906832 | 0.878136
delaunay_n13 | 0.710651 | 0.902913 | 0.91839
delaunay_n14 | 0.661084 | 0.914058 | 0.941802
delaunay_n15 | 0.699193 | 0.963969 | 0.947161
delaunay_n16 | 0.685931 | 0.968672 | 0.953034
delaunay_n17 | 0.692844 | 0.972879 | 0.963517
delaunay_n18 | 0.693935 | 0.983028 | 0.975103
ca-HepTh.txt | 0.687343 | 0.645873 | 0.650115
ca-CondMat.txt | 0.630154 | 0.663701 | 0.6764
ca-AstroPh.txt | 0.61735 | 0.615687 | 0.615749
email-Enron.txt | 0.376773 | 0.632443 | 0.785357
ca-GrQc.txt | 0.721421 | 0.671429 | 0.687907
ca-HepPh.txt | 0.643376 | 0.648718 | 0.593803
loc-brightkite_edges.txt | 0.71338 | 0.603389 | 0.721972
amazon | 0.731695 | 0.723097 | 0.738042
dblp | 0.605114 | 0.711748 | 0.830386
youtube | 0.669736 | 0.671671 | 0.9342
lj | 0.570621 | 0.728383 | 0.993926
orkut | 0.556868 | 0.569822 | 0.861902
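The 74% average local data access ratio quoted in the conclusions can be checked against Table 5. Assuming it is a simple unweighted mean over all graphs and the three partitions (our assumption about how the average is taken), a few lines of Python reproduce the figure:

```python
# Mean local access ratio across all graphs and partitions in Table 5
# (head, mid, tail), assuming a simple unweighted average.
head = [0.736364, 0.578635, 0.737226, 0.710651, 0.661084, 0.699193, 0.685931,
        0.692844, 0.693935, 0.687343, 0.630154, 0.617350, 0.376773, 0.721421,
        0.643376, 0.713380, 0.731695, 0.605114, 0.669736, 0.570621, 0.556868]
mid  = [0.750000, 0.789894, 0.906832, 0.902913, 0.914058, 0.963969, 0.968672,
        0.972879, 0.983028, 0.645873, 0.663701, 0.615687, 0.632443, 0.671429,
        0.648718, 0.603389, 0.723097, 0.711748, 0.671671, 0.728383, 0.569822]
tail = [0.642045, 0.707775, 0.878136, 0.918390, 0.941802, 0.947161, 0.953034,
        0.963517, 0.975103, 0.650115, 0.676400, 0.615749, 0.785357, 0.687907,
        0.593803, 0.721972, 0.738042, 0.830386, 0.934200, 0.993926, 0.861902]

ratios = head + mid + tail
print(f"average local access ratio: {sum(ratios) / len(ratios):.2%}")  # about 74%
```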
Table 6. Speedup of Stream graph processing.
Graph Name | Factor | Stream Time (s) | Exact Time (s) | Speedup
delaunay_n12 | 4 | 2.04689 | 2.72048 | 1.33
delaunay_n12 | 8 | 1.8969 | 2.72048 | 1.43
delaunay_n12 | 16 | 1.77604 | 2.72048 | 1.53
delaunay_n13 | 4 | 2.67814 | 4.70596 | 1.76
delaunay_n13 | 8 | 2.27257 | 4.70596 | 2.07
delaunay_n13 | 16 | 2.0258 | 4.70596 | 2.32
delaunay_n14 | 4 | 3.94324 | 8.91793 | 2.26
delaunay_n14 | 8 | 3.04275 | 8.91793 | 2.93
delaunay_n14 | 16 | 2.60059 | 8.91793 | 3.43
delaunay_n15 | 4 | 6.34795 | 17.4302 | 2.75
delaunay_n15 | 8 | 4.44783 | 17.4302 | 3.92
delaunay_n15 | 16 | 3.6619 | 17.4302 | 4.76
delaunay_n16 | 4 | 11.2135 | 34.2433 | 3.05
delaunay_n16 | 8 | 7.29339 | 34.2433 | 4.70
delaunay_n16 | 16 | 5.65829 | 34.2433 | 6.05
delaunay_n17 | 4 | 21.0084 | 68.4882 | 3.26
delaunay_n17 | 8 | 13.0833 | 68.4882 | 5.23
delaunay_n17 | 16 | 9.71683 | 68.4882 | 7.05
delaunay_n18 | 4 | 40.4338 | 135.191 | 3.34
delaunay_n18 | 8 | 24.8456 | 135.191 | 5.44
delaunay_n18 | 16 | 17.8642 | 135.191 | 7.57
delaunay_n18 | 32 | 14.1647 | 135.191 | 9.54
delaunay_n18 | 64 | 12.0415 | 135.191 | 11.23
delaunay_n19 | 32 | 26.6216 | 270.491 | 10.16
delaunay_n19 | 64 | 22.5967 | 270.491 | 11.97
delaunay_n19 | 128 | 20.9203 | 270.491 | 12.93
delaunay_n20 | 32 | 51.7499 | 536.331 | 10.36
delaunay_n20 | 64 | 44.3933 | 536.331 | 12.08
delaunay_n20 | 128 | 40.2489 | 536.331 | 13.33
delaunay_n21 | 32 | 104.236 | 1076.77 | 10.33
delaunay_n21 | 64 | 87.0196 | 1076.77 | 12.37
delaunay_n21 | 128 | 80.7149 | 1076.77 | 13.34
delaunay_n22 | 32 | 208.478 | 2155.85 | 10.34
delaunay_n22 | 64 | 177.637 | 2155.85 | 12.14
delaunay_n22 | 128 | 158.13 | 2155.85 | 13.63
delaunay_n23 | 64 | 352.531 | 4356.982 | 12.36
delaunay_n23 | 128 | 317.053 | 4356.982 | 13.74
delaunay_n23 | 256 | 301.295 | 4356.982 | 14.46
delaunay_n24 | 128 | 645.002 | 8708.3756 | 13.50
delaunay_n24 | 256 | 604.981 | 8708.3756 | 14.39
delaunay_n24 | 512 | 598.49 | 8708.3756 | 14.55
amazon | 16 | 20.4202 | 147.926 | 7.24
amazon | 32 | 16.2884 | 147.926 | 9.08
amazon | 64 | 14.0285 | 147.926 | 10.54
ca-AstroPh | 4 | 16.1657 | 33.1578 | 2.05
ca-AstroPh | 8 | 11.0193 | 33.1578 | 3.01
ca-AstroPh | 16 | 8.38061 | 33.1578 | 3.96
ca-CondMat | 4 | 8.70114 | 16.5758 | 1.91
ca-CondMat | 8 | 6.20076 | 16.5758 | 2.67
ca-CondMat | 16 | 4.86599 | 16.5758 | 3.41
ca-GrQc | 4 | 2.65749 | 3.23613 | 1.22
ca-GrQc | 8 | 2.20744 | 3.23613 | 1.47
ca-GrQc | 16 | 2.02119 | 3.23613 | 1.60
ca-HepPh | 4 | 10.263 | 21.3077 | 2.08
ca-HepPh | 8 | 6.98969 | 21.3077 | 3.05
ca-HepPh | 16 | 5.45666 | 21.3077 | 3.90
ca-HepPh | 32 | 4.83524 | 21.3077 | 4.41
ca-HepPh | 64 | 4.3246 | 21.3077 | 4.93
ca-HepTh | 4 | 3.78554 | 5.37471 | 1.42
ca-HepTh | 8 | 2.92663 | 5.37471 | 1.84
ca-HepTh | 16 | 2.46299 | 5.37471 | 2.18
dblp | 16 | 22.0434 | 163.092 | 7.40
dblp | 32 | 17.8394 | 163.092 | 9.14
dblp | 64 | 15.4684 | 163.092 | 10.54
email-Enron | 4 | 16.133 | 63.1888 | 3.92
email-Enron | 8 | 10.859 | 63.1888 | 5.82
email-Enron | 16 | 8.04875 | 63.1888 | 7.85
lj | 512 | 409.672 | 6937.4 | 16.93
lj | 1024 | 393.518 | 6937.4 | 17.63
lj | 2048 | 397.583 | 6937.4 | 17.45
loc-brightkite_edges | 4 | 14.1111 | 30.1892 | 2.14
loc-brightkite_edges | 8 | 10.1597 | 30.1892 | 2.97
loc-brightkite_edges | 16 | 8.09854 | 30.1892 | 3.73
youtube | 512 | 36.2397 | 951.094 | 26.24
youtube | 1024 | 34.9246 | 951.094 | 27.23
youtube | 2048 | 34.6978 | 951.094 | 27.41
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
