Get 2 the Core
25 Jun - 23 Jul
The Newcrest Crowd
This competition has finished.
Automatically detect drill core tray outlines in core photography
Core photography is a rich source of geological information that contains important textural, mineralogical and geotechnical information. Currently, core photography is underutilized in exploration and mining due to inconsistencies in the data and the arduous task of transforming historical photography into the cropped and depth-registered form required for integration with other geological data.
Photographs taken on Newcrest sites are now mostly standardised, but we have millions of historic core images from tens of thousands of drill holes. As a new generation of image analysis techniques becomes more powerful and prevalent within exploration and mining, these large image repositories will eventually become rich sources of quantitative data.
The dataset is available at the bottom of this page.
Cropping core photography is an arduous task when done manually. When the core trays aren’t in a locked position relative to the camera (as is the case for much historical core photography), cropping marks need to be manually adjusted in each image. Depending on the quality of the core photography, an individual drill hole could take up to an hour to manually crop. If the exact extents of the core rows can be automatically defined, then this could result in an 80 – 90% decrease in the time spent cropping drill core photography.
The challenge - $10,000 for the top score
The challenge is to build an algorithm that can determine and map the spatial extents of core tray rows and distinguish between different aspect ratios. A separate prize will be awarded for solutions that also solve the problem but do not exactly fit these scoring requirements.
This challenge asks you to submit masking instructions for a set of core tray images. We are providing a training dataset of images together with completed masking instructions (the ground truth). The test dataset consists of images only, for which you need to predict the masking instructions and submit them as a CSV file. This file is automatically scored against the existing, but hidden, masking instructions.
Critically, the solution needs to be able to perform on inconsistent photography where:
- Core boxes can be made from different materials (wood, steel, cardboard, plastic);
- Images are highly variable in terms of resolution, aspect ratio and quality;
- The relative position of the core tray within the image can be variable.
Scoring file format:
The submission should follow the structure of the sample submission (provided as part of the dataset at the commencement of the competition):
- There should be a column named OutputID that contains the name of the test file.
- For each image there are several columns for the predicted bounding box coordinates. The naming convention for these columns is core tray row<row_number>_<coordinate (x or y)>_<coordinate number 1 to 4>. There are up to 11 rows in one image, each row requires 4 coordinates, and each coordinate requires an x and a y location.
- If a row is not present in the ground truth image, its predictions are ignored and do not affect the loss.
- There are also columns for your prediction of the different class types. Note that classification is based on the presence of at least one of the features in the image.
- The x and y coordinates correspond directly to the pixels of the image. We will provide instructions to easily determine those coordinates.
The different features that the image can contain are:
Type_0 – Rows appear nicely positioned in the images with no distortion. These are “straight” trays.
Type_1 – Rows appear slightly bent or misaligned in the image
Type_2 – Image appears slightly distorted
Type_3 – Rows are bent and also distorted
Type_4 – The image has a low aspect ratio
Type_5 – The image has a low aspect ratio and the rows are also slightly bent.
The order of images in the submission file can be random, as long as the correct OutputID is added as the last column of the respective CSV row. An example submission file will be supplied to clarify this.
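As a minimal sketch of how the header could be generated programmatically (in Python, assuming column names of the form row<r>_x_<k> / row<r>_y_<k> and Type_0 to Type_5, with OutputID at the end of each row; verify every name and position against the supplied sample submission, as this is not the official specification):

```python
import csv

N_ROWS = 11      # up to 11 core tray rows per image
N_CORNERS = 4    # each row is described by 4 corner coordinates
N_TYPES = 6      # Type_0 ... Type_5 classification columns

def submission_columns():
    """Build the column list: corner x/y columns per row, class columns, then OutputID."""
    cols = []
    for r in range(1, N_ROWS + 1):
        for k in range(1, N_CORNERS + 1):
            cols.append(f"row{r}_x_{k}")   # assumed naming; confirm against the sample file
            cols.append(f"row{r}_y_{k}")
    cols += [f"Type_{t}" for t in range(N_TYPES)]
    cols.append("OutputID")                # the brief places OutputID as the last column
    return cols

# Write a skeleton submission with one placeholder row per (hypothetical) test image.
test_images = ["IMG_0001.jpg", "IMG_0002.jpg"]
columns = submission_columns()
with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(columns)
    for name in test_images:
        writer.writerow([0] * (len(columns) - 1) + [name])
```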
The labels and bounding boxes were created by three human geologists who made their best attempt at cropping each core row and labelling it with one of the 6 labels. Because of the manual mark-up, the labels may not be perfectly consistent across the entire dataset. However, they represent the kind of work required to produce the outcome we are looking to achieve with this competition, so a strong result in this competition will translate into a huge saving of manual effort.
The scoring is a combination of two loss functions: an IoU (Intersection over Union) loss to determine the accuracy of the bounding box predictions, and a negative log likelihood for the multi-class, multi-label classification task. The two tasks are weighted such that 90% of the score comes from the bounding box prediction task.
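The exact combination formula lives in the official scoring script, but conceptually the final score weights the two components 90/10. A purely illustrative sketch follows; the function names, the use of binary cross-entropy for the multi-label term, and the way the two terms are normalised are all assumptions, not the official metric:

```python
import numpy as np

def multilabel_nll(y_true, y_prob, eps=1e-15):
    """Illustrative multi-label negative log likelihood (binary cross-entropy over Type_0..Type_5)."""
    y_prob = np.clip(np.asarray(y_prob, dtype=float), eps, 1 - eps)
    y_true = np.asarray(y_true, dtype=float)
    return -np.mean(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))

def combined_score(mean_row_iou, classification_score, w_box=0.9, w_cls=0.1):
    """Illustrative 90/10 weighting of the bounding box and classification components."""
    return w_box * mean_row_iou + w_cls * classification_score
```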
Intersection over Union (IoU)
We use the intersection over union metric to score the bounding box prediction for each core tray row. The intersection is calculated row by row, so the bounding box you submit for row 1 is compared to the true bounding box for row 1. If your submission file contains the correct coordinates for row 1 but they are entered in the fields for row 2, your score for row 1 will be 0; you must therefore place each estimate against the correct row in the submission. To determine the polygon for a particular row, the convex hull of the submitted coordinates is taken. This means you do not need to specify the coordinates for a given row in any particular order (e.g. left to right, top to bottom), as the scoring system will infer a polygon from your four coordinates.
To calculate the IoU score, we first calculate the area of the intersection of your predicted polygon with the ground truth polygon, \(A_{\cap}\), and the area of the union of the two polygons, \(A_{\cup}\). We then compute the following for each row:

\[ \mathrm{IoU} = \frac{A_{\cap}}{A_{\cup}} \]
Additional core tray rows that are predicted, but not present in the image will be ignored by the scoring system.
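For local validation, the per-row IoU described above can be reproduced with a standard geometry library. Below is a minimal sketch using shapely (not the official scoring code): the four submitted coordinates are turned into a polygon via their convex hull, as described above, and compared against the ground truth polygon.

```python
from shapely.geometry import MultiPoint

def row_iou(pred_coords, true_coords):
    """IoU between the convex hulls of two sets of (x, y) pixel coordinates.

    pred_coords / true_coords: four (x, y) corner coordinates in any order.
    """
    pred_poly = MultiPoint(pred_coords).convex_hull
    true_poly = MultiPoint(true_coords).convex_hull
    union = pred_poly.union(true_poly).area
    if union == 0:
        return 0.0
    return pred_poly.intersection(true_poly).area / union

# Example with a prediction offset slightly from the ground truth.
pred = [(100, 50), (900, 55), (895, 180), (105, 175)]
true = [(110, 60), (910, 60), (910, 185), (110, 185)]
print(round(row_iou(pred, true), 3))
```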
Open Tech Prize - $5,000 for the top solution
You also have an additional chance to win the Open Tech Prize by submitting an alternative approach which solves the underlying problem of cropping images. You can submit your solution by attaching a presentation, report or video to the final submission. The main prize and the Open Tech prize are independent of each other, which means you are not required to submit a score or solution for one prize to qualify for the other.
Your submission will be judged qualitatively by the sponsor's expert judging panel based on the following criteria:
- Implementation (how easily the solution could be implemented)
- Ability to capture geometrical complexity