Celery job saves wrong or too much data


I have a Celery job that fetches crypto exchange-rate data from coinmarketcap.com.
When the job has been triggered several times, I don't have 100 results in my database; I always end up with 101-108 records. Why is that?

tasks.py

import logging

import requests

from .models import CryptoPrices  # assuming the model lives in this app's models module

logger = logging.getLogger(__name__)


def get_exchange_rate():
    api_url = "https://api.coinmarketcap.com/v1/ticker/?limit=100"
    try:
        exchange_rates = requests.get(api_url).json()
        for exchange_rate in exchange_rates:
            # Update the existing row for this coin, or create it if missing
            CryptoPrices.objects.update_or_create(
                key=exchange_rate['id'],
                symbol=exchange_rate['symbol'],
                defaults={
                    "market_cap_usd": round(float(exchange_rate['market_cap_usd']), 3),
                    "volume_usd_24h": round(float(exchange_rate['24h_volume_usd']), 3),
                    "value": round(float(exchange_rate['price_usd']), 2),
                },
            )
        logger.info("Crypto rate(s) updated successfully.")
    except Exception as e:
        print(e)

Is there any way to limit the maximum number of entries in this table?
In the end I want to have exactly 100.

Regards










      python celery






asked Mar 26 at 16:10 by user10000033





























          1 Answer
































You're relying on the ordering of the CoinMarketCap feed being consistent. Since it ranks coins by market cap, coins will drop on and off the bottom of the list, and since you're using update_or_create(), new entries keep being created while the old ones are left hanging around.

If you want to track the top 100 yourself, I'd suggest fetching the top 150 and doing your own ordering/filtering. Alternatively, add an update timestamp to your model (auto_now=True) and, after updating, delete anything older than a certain age.
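
A minimal sketch of that second suggestion. The updated_at field and the purge helper are not in the original code; the other field names and types are guesses based on the question's task:

from datetime import timedelta

from django.db import models
from django.utils import timezone


class CryptoPrices(models.Model):
    key = models.CharField(max_length=100)
    symbol = models.CharField(max_length=16)
    market_cap_usd = models.FloatField()
    volume_usd_24h = models.FloatField()
    value = models.FloatField()
    # Hypothetical extra field: auto_now stamps the row on every save(),
    # including the saves performed by update_or_create()
    updated_at = models.DateTimeField(auto_now=True)


def purge_stale_rates(max_age=timedelta(minutes=10)):
    # Any row the latest update run didn't touch is older than the cutoff
    cutoff = timezone.now() - max_age
    CryptoPrices.objects.filter(updated_at__lt=cutoff).delete()

Calling purge_stale_rates() at the end of the task removes coins that fell out of the top 100, so the table converges back to 100 rows after each run.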






answered Mar 26 at 16:16 by Keith Bailey
• Can't I simply overwrite the old ones and take only the 100 from my query at the API URL - ?limit=100?
  – user10000033 Mar 26 at 17:08











• @venom You have two options if you absolutely only ever want 100 rows in the table (both are suboptimal): delete everything before you start, or track which rows were updated and delete the rest after the update (see the sketch below). A truly awful way would be to use the ranking as the primary key, but that would likely break your other queries in interesting ways.
  – Keith Bailey Mar 26 at 17:42
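
A rough sketch of the "track which ones are updated and delete the rest" option from that comment, reusing the question's task and assuming key (the API's id field) is unique per coin:

import logging

import requests

from .models import CryptoPrices  # assumed import path, as in the question

logger = logging.getLogger(__name__)


def get_exchange_rate():
    api_url = "https://api.coinmarketcap.com/v1/ticker/?limit=100"
    try:
        exchange_rates = requests.get(api_url).json()
        seen_keys = []
        for exchange_rate in exchange_rates:
            CryptoPrices.objects.update_or_create(
                key=exchange_rate['id'],
                symbol=exchange_rate['symbol'],
                defaults={
                    "market_cap_usd": round(float(exchange_rate['market_cap_usd']), 3),
                    "volume_usd_24h": round(float(exchange_rate['24h_volume_usd']), 3),
                    "value": round(float(exchange_rate['price_usd']), 2),
                },
            )
            seen_keys.append(exchange_rate['id'])
        # Delete every row whose key was not in this run's top 100
        CryptoPrices.objects.exclude(key__in=seen_keys).delete()
        logger.info("Crypto rate(s) updated successfully.")
    except Exception as e:
        logger.exception(e)

This keeps the table at exactly 100 rows after every successful run, at the cost of one extra delete query; the "delete everything first" option trades that for a brief window in which the table is empty.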










