How to compute mean average precision?


I found this code at:
https://www.kaggle.com/chenyc15/mean-average-precision-metric

I am trying to understand how mAP is computed. I have 2 questions:

1. What is 'scores'? Is it an IoU result?
   scores: length N numpy array of scores associated with predicted bboxes

2. I thought that mAP is computed using only one threshold, but here there are many thresholds. Why?

import numpy as np

def map_iou(boxes_true, boxes_pred, scores, thresholds=[0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75]):
    """
    Mean average precision at different intersection over union (IoU) thresholds.

    input:
        boxes_true: Mx4 numpy array of ground truth bounding boxes of one image.
                    bbox format: (x1, y1, w, h)
        boxes_pred: Nx4 numpy array of predicted bounding boxes of one image.
                    bbox format: (x1, y1, w, h)
        scores:     length N numpy array of scores associated with predicted bboxes
        thresholds: IoU thresholds to evaluate mean average precision on
    output:
        map: mean average precision of the image
    """
    # According to the introduction, images with no ground truth bboxes will not be
    # included in the map score unless there is a false positive detection (?)

    # return None if both are empty, don't count the image in final evaluation (?)
    if len(boxes_true) == 0 and len(boxes_pred) == 0:
        return None

    assert boxes_true.shape[1] == 4 or boxes_pred.shape[1] == 4, "boxes should be 2D arrays with shape[1]=4"
    if len(boxes_pred):
        assert len(scores) == len(boxes_pred), "boxes_pred and scores should be same length"
        # sort boxes_pred by scores in decreasing order
        boxes_pred = boxes_pred[np.argsort(scores)[::-1], :]

    map_total = 0

    # loop over thresholds
    for t in thresholds:
        matched_bt = set()
        tp, fn = 0, 0
        for i, bt in enumerate(boxes_true):
            matched = False
            for j, bp in enumerate(boxes_pred):
                miou = iou(bt, bp)  # iou() is defined elsewhere in the kernel
                if miou >= t and not matched and j not in matched_bt:
                    matched = True
                    tp += 1  # bt is matched for the first time, count as TP
                    matched_bt.add(j)
            if not matched:
                fn += 1  # bt has no match, count as FN

        fp = len(boxes_pred) - len(matched_bt)  # FP is a bp not matched to any bt
        m = tp / (tp + fn + fp)
        map_total += m

    return map_total / len(thresholds)
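
The snippet above calls an iou() helper that is defined elsewhere in the Kaggle kernel and is not part of the pasted code. For context, here is a minimal sketch of such a helper, assuming the (x1, y1, w, h) box format from the docstring; the name and implementation are my assumption for illustration, not necessarily the kernel's exact code.

def iou(box_a, box_b):
    # boxes are (x1, y1, w, h); convert to corner coordinates (x1, y1, x2, y2)
    ax1, ay1, aw, ah = box_a
    bx1, by1, bw, bh = box_b
    ax2, ay2 = ax1 + aw, ay1 + ah
    bx2, by2 = bx1 + bw, by1 + bh

    # intersection rectangle (zero area if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)

    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

A toy call with made-up boxes, just to illustrate the expected shapes; note that in this code scores is only used to sort boxes_pred by detection confidence, it is not an IoU value:

boxes_true = np.array([[10, 10, 50, 50]])     # one ground-truth box
boxes_pred = np.array([[12, 12, 48, 48],      # close to the ground truth
                       [200, 200, 30, 30]])   # spurious detection
scores     = np.array([0.9, 0.2])             # detector confidence per predicted box

print(map_iou(boxes_true, boxes_pred, scores))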









Tags: dictionary, deep-learning






      asked Mar 25 at 15:42









supermario
15 bronze badges



















