notebooks/VisualizingOutputsExample.ipynb

%% Cell type:code id: tags:

``` python
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))

from scipy.misc import bytescale                       # store image array

import autocnet
from autocnet.examples import get_path                 # get file path
from autocnet.fileio.io_gdal import GeoDataset         # set handle, get image as array
from autocnet.graph.network import CandidateGraph      # construct adjacency graph
from autocnet.matcher import feature_extractor as fe   # extract features from image
from autocnet.matcher.matcher import FlannMatcher      # match features between images
from autocnet.utils import visualization as vis
```

%% Cell type:code id: tags:

``` python
# Display graphs in a separate window so their size can be changed
%pylab qt4

# Alternatively, display graphs inline in the notebook
# %pylab inline
```

%% Output

Populating the interactive namespace from numpy and matplotlib

%% Cell type:markdown id: tags:

Set up for visualization: Construct an adjacency graph with features extracted
------------------------------------------------------------------------------

%% Cell type:code id: tags:

``` python
adjacency_dict = {"../examples/Apollo15/AS15-M-0297_SML.png": ["../examples/Apollo15/AS15-M-0298_SML.png"],
                  "../examples/Apollo15/AS15-M-0298_SML.png": ["../examples/Apollo15/AS15-M-0297_SML.png"]}
adjacencyGraph = CandidateGraph.from_adjacency(adjacency_dict)
```

%% Cell type:code id: tags:

``` python
n = adjacencyGraph.node[0]
n.convex_hull_ratio()
```

%% Output

    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    <ipython-input-4-1127c95d25db> in <module>()
          1 n = adjacencyGraph.node[0]
    ----> 2 n.convex_hull_ratio()

    /Users/jlaura/github/autocnet/autocnet/graph/network.py in convex_hull_ratio(self)
         97         ideal_area = self.handle.pixel_area
         98         if not hasattr(self, 'keypoints'):
    ---> 99             raise AttributeError('Keypoints must be extracted already, they have not been.')
        100
        101         ratio = convex_hull_ratio(keypoints, ideal_area)

    AttributeError: Keypoints must be extracted already, they have not been.

%% Cell type:code id: tags:

``` python
adjacencyGraph.edge[0][1]
```

%% Output

    <autocnet.graph.network.Edge at 0x11b570b70>

%% Cell type:code id: tags:

``` python
adjacencyGraph.extract_features(method='sift', extractor_parameters={'nfeatures': 25})

imageName1 = adjacencyGraph.node[0]['image_name']
imageName2 = adjacencyGraph.node[1]['image_name']
print(imageName1)
print(imageName2)
```

%% Cell type:markdown id: tags:

Use the visualization utility plotFeatures() to plot the features of a single image
-----------------------------------------------------------------------------------

In this example, we plot both images so that each opens in a separate window:

1. Features found in AS15-M-0298_SML.png
2. Features found in AS15-M-0297_SML.png

%% Cell type:code id: tags:

``` python
plt.figure(0)
keypoints1 = adjacencyGraph.get_keypoints(imageName1)
vis.plotFeatures(imageName1, keypoints1)

plt.figure(1)
keypoints2 = adjacencyGraph.get_keypoints(imageName2)
vis.plotFeatures(imageName2, keypoints2)

plt.show()
```

%% Cell type:code id: tags:

``` python
hull = adjacencyGraph.covered_area(1)
```

%% Cell type:code id: tags:

``` python
plt.figure(0)
keypoints2 = adjacencyGraph.get_keypoints(imageName2)
vis.plotFeatures(imageName1, keypoints2)

# Collect the keypoint coordinates into an array and overlay the convex hull
kp2 = np.empty((len(keypoints2), 2))
for i, j in enumerate(keypoints2):
    kp2[i] = j.pt[0], j.pt[1]
plt.plot(kp2[hull.vertices, 0], kp2[hull.vertices, 1], 'r--', lw=2)

print(adjacencyGraph.node[0]['handle'].pixel_area)
print(hull.volume / adjacencyGraph.node[0]['handle'].pixel_area)
```

%% Cell type:code id: tags:

``` python
print(hull.volume)
```

%% Cell type:code id: tags:

``` python
type(adjacencyGraph[0][1])
adjacencyGraph[0][1]
type(adjacencyGraph.edge[0][1])
print(adjacencyGraph.edge[0][1])
```

%% Cell type:code id: tags:

``` python
plt.close(0)
plt.close(1)
```

%% Cell type:markdown id: tags:

Use the visualization utility plotAdjacencyGraphFeatures() to plot the features on all images of the graph in a single figure
-----------------------------------------------------------------------------------------------------------------------------

%% Cell type:code id: tags:

``` python
vis.plotAdjacencyGraphFeatures(adjacencyGraph, featurePointSize=7)
```

%% Cell type:code id: tags:

``` python
plt.close()
```

%% Cell type:markdown id: tags:

Set up for visualization: Find matches in the adjacency graph
-------------------------------------------------------------

%% Cell type:code id: tags:

``` python
# Create a FLANN matcher
matcher = FlannMatcher()

# Loop through the nodes of the graph and add each node's feature descriptors to the matcher
for node, attributes in adjacencyGraph.nodes_iter(data=True):
    matcher.add(attributes['descriptors'], key=node)

# Build the KD-tree from the added feature descriptors
matcher.train()

# Query the matcher with each node's descriptors for their two nearest neighbors.
# The matches are returned as pandas DataFrames and added to the adjacency graph.
for node, attributes in adjacencyGraph.nodes_iter(data=True):
    descriptors = attributes['descriptors']
    matches = matcher.query(descriptors, node, k=2)
    adjacencyGraph.add_matches(matches)
```

%% Cell type:markdown id: tags:

Use the visualization utility plotAdjacencyGraphMatches() to plot the matches between two images of the graph in a single figure
--------------------------------------------------------------------------------------------------------------------------------

%% Cell type:code id: tags:

``` python
vis.plotAdjacencyGraphMatches(imageName1, imageName2, adjacencyGraph,
                              aspectRatio=0.44, featurePointSize=3, lineWidth=1,
                              saveToFile='myimage.png')
plt.figure(0)
img = plt.imread('myimage.png')
plt.imshow(img)

vis.plotAdjacencyGraphMatches(imageName1, imageName2, adjacencyGraph,
                              aspectRatio=0.44, featurePointSize=10, lineWidth=3,
                              saveToFile='myimage.png')
plt.figure(1)
img = plt.imread('myimage.png')
plt.imshow(img)
```

%% Cell type:markdown id: tags:

Below is an earlier attempt at plotting both images within the same display box.<br>
Features are plotted.<br>
Lines are not drawn.

%% Cell type:code id: tags:

``` python
plt.figure(2)
vis.plotAdjacencyGraphMatchesSingleDisplay(imageName1, imageName2, adjacencyGraph)
```

%% Cell type:code id: tags:

``` python
plt.close(0)
plt.close(1)
plt.close(2)
```

%% Cell type:code id: tags:

``` python

```
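%% Cell type:markdown id: tags:

Earlier in the notebook, `convex_hull_ratio()` raised an AttributeError because no keypoints had been extracted yet. The quantity it computes, the area of the keypoints' convex hull divided by the image's total pixel area, can be sketched with `scipy.spatial.ConvexHull`. The keypoint coordinates below are made up for illustration; this is not autocnet's implementation.

%% Cell type:code id: tags:

``` python
import numpy as np
from scipy.spatial import ConvexHull

def convex_hull_ratio(points, ideal_area):
    """Ratio of the keypoints' convex-hull area to the full image area."""
    hull = ConvexHull(points)
    # In 2D, ConvexHull.volume is the enclosed area (hull.area is the perimeter),
    # which is why the notebook prints hull.volume above.
    return hull.volume / ideal_area

# Hypothetical keypoints covering one quadrant of a 100x100-pixel image
pts = np.array([[0, 0], [50, 0], [50, 50], [0, 50], [25, 25]])
print(convex_hull_ratio(pts, 100 * 100))  # 0.25
```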