text: string (lengths 20 to 57.3k)
labels: class label (4 classes)
Title: Creating a model and calling model.fit() in a loop leaks memory (100% reproducible) Body: I have tried various approaches, but memory is definitely leaking; it seems memory cannot be released fast enough. The logs show periodic memory recycling, but over time there is still a clear upward trend. Name: keras Version: 3.6.0 Please find the [gist](https://colab.sandbox.google.com/gist/Venkat6871/30fb12f2188a7826e2c649bbd945dbda/80753_tf_2-18-0-nightly-v.ipynb) here for your reference. https://github.com/tensorflow/tensorflow/issues/80753#issuecomment-2503203801
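A minimal sketch of the pattern described above, with a commonly suggested mitigation; the model definition and data sizes are illustrative assumptions, not taken from the gist:

```python
# Hypothetical repro sketch: building and fitting a model inside a loop.
# clear_session() + gc.collect() is a commonly suggested mitigation; whether
# it fully stops the growth here is exactly what this issue is about.
import gc
import numpy as np
import keras
from keras import layers

x = np.random.rand(256, 32).astype("float32")
y = np.random.rand(256, 1).astype("float32")

for i in range(100):
    model = keras.Sequential([
        keras.Input(shape=(32,)),
        layers.Dense(64, activation="relu"),
        layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(x, y, epochs=1, verbose=0)
    keras.utils.clear_session()  # drop graph/global state between iterations
    gc.collect()                 # force Python-level collection
```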
2hard
Title: [feat] support returning images Body: Marker sometimes leaves references like `![1_image_0.png](1_image_0.png)` in the text. It would be great to also return these images plus their bounding boxes as another option.
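A hedged sketch of a stopgap until such an option exists: pull the image references Marker leaves in the markdown out with a regex. Returning actual image bytes plus bounding boxes would still need support inside Marker itself.

```python
import re

markdown = "some text ![1_image_0.png](1_image_0.png) more text"
# Matches markdown image syntax: ![alt](path)
image_refs = re.findall(r"!\[([^\]]*)\]\(([^)]+)\)", markdown)
print(image_refs)  # [('1_image_0.png', '1_image_0.png')]
```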
1medium
Title: [BUG] 'Interpret Data' raises an Oops! error Body: **Describe the bug** I rendered a bar graph; clicking 'Interpret Data' on a bar produces an Oops! error. **To Reproduce** Steps to reproduce the behavior: 1. Render a bar graph 2. Click 'Interpret Data' on a bar; an Oops! error appears **Expected behavior** **Screenshots** ![image](https://github.com/Kanaries/pygwalker/assets/1471864/81f1fce1-5009-4dd4-a119-61f33f6b17c0) ![image](https://github.com/Kanaries/pygwalker/assets/1471864/788bfad9-f9d0-469a-a3cd-53d0fc4780d0) **Versions** - pygwalker version: 0.4.6 - python version: 3.8 - browser: chrome latest **Additional context**
1medium
Title: What is width_multiple? Body: ### Search before asking - [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. ### Question Hi, always grateful to be able to use your YOLO. I have a question. I modified 'yolov5.yaml' by changing "s = [0.50, 0.50, 1024]" to "s = [0.44, 0.33, 1024]". I understand that depth_multiple controls the number of DarknetBottleneck blocks. However, I don't fully understand what width_multiple does. I know it generally affects the convolution channels, but I want to know exactly what it controls and how (see the sketch below). Could you explain this in detail? Thanks. ### Additional _No response_
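A hedged sketch of how a width multiplier is typically applied in YOLOv5-style model parsing; the make_divisible rounding follows YOLOv5's utils, but the concrete values here are illustrative:

```python
import math

def make_divisible(x: float, divisor: int = 8) -> int:
    """Round a scaled channel count up to the nearest multiple of `divisor`."""
    return math.ceil(x / divisor) * divisor

width_multiple = 0.33   # e.g. the second value in "s = [0.44, 0.33, 1024]"
base_channels = 1024    # channel count listed for a layer in the yaml

# Each layer's output channels are scaled by width_multiple, then rounded,
# so the whole network gets uniformly "thinner" or "wider":
scaled = make_divisible(base_channels * width_multiple, 8)
print(scaled)  # 1024 * 0.33 = 337.92 -> rounded up to 344
```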
3misc
Title: Error when trying to run example: MCMC Methods for Tall Data Body: I tried to run the code at http://num.pyro.ai/en/stable/examples/covtype.html?highlight=GPU#sphx-glr-download-examples-covtype-py and got: TypeError: Argument 'None' of type '<class 'NoneType'>' is not a valid JAX type
1medium
Title: Add a way to override the execution order of certain class-level and method-level response handlers Body: **Is your feature request related to a problem? Please describe.** Here's the original question raised by @liiight on [gitter](https://gitter.im/python-uplink/Lobby?at=5beddea66b9822140d36ce8f): > so i have a class response handler to handle request errors, which basically just does `raise_for_status()` > I have another response handler that I want to use in order to retry 404 status code via a retry lib I use > I set the 2nd response handler directly on the relevant method but it seems that the 1st one is the one that actually catches the exception > is there a way to decide on the order of those? **Describe the solution you'd like** There should be a way to specify that a particular method-level response handler should run before any or certain class-level handlers. **Additional context** Here's my response to the original question: > currently, class-level decorators run before method-level decorators, as you noticed in your usage. #72 (v0.4.1) details some of the rationale for this. Currently, Uplink doesn't give you a way to decide on the order between class-level and method-level decorators. From what I can tell, there are two existing workarounds, but both have drawbacks. First, you could make the retry response handler a class-level decorator. If you don't want all methods to be retried, the other workaround is to apply the raise_for_status decorator on each method, but this makes things more verbose.
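A hedged sketch of the two decorator levels involved, using uplink's `response_handler`; the consumer and handler bodies are illustrative, not taken from the original report:

```python
from uplink import Consumer, get, response_handler

def raise_for_status(response):
    # Class-level handler: fail fast on any HTTP error status.
    response.raise_for_status()
    return response

def retry_on_404(response):
    # Method-level handler: would like to see 404s *before* raise_for_status
    # does, so a retry library could take over.
    if response.status_code == 404:
        ...  # hand off to the retry library here
    return response

@response_handler(raise_for_status)   # class-level: currently runs first
class GitHub(Consumer):

    @response_handler(retry_on_404)   # method-level: currently runs second
    @get("users/{username}")
    def get_user(self, username):
        """Fetch a single user."""
```

With the ordering described in the issue, `raise_for_status` intercepts the 404 before `retry_on_404` ever sees it; the feature request is for a way to run the method-level handler first.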
1medium
Title: https://huggingface.co/spaces/giswqs/Streamlit Body: Exception: module 'ipyleaflet' has no attribute 'TileLayer'

```
Traceback:
File "/home/user/.local/lib/python3.8/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
File "/home/user/app/app.py", line 2, in <module>
    import leafmap.foliumap as leafmap
File "/home/user/.local/lib/python3.8/site-packages/leafmap/__init__.py", line 42, in <module>
    raise Exception(e)
```
1medium
Title: SeleniumBase Driver switches to another tab Body: After opening the three tabs below, the driver switches from the third tab back to the second or the first one. This doesn't happen with the plain Selenium driver or with uc_chromedriver. I managed to reproduce the bug with the following code, on Python 3.9 and 3.11.6:

```python
from seleniumbase import Driver

AMAZON_PRIME_LOGIN = "https://www.amazon.com/ap/signin?openid.pape.max_auth_age=3600&openid.return_to=https%3A%2F%2Fgaming.amazon.com%2Fprime%2Fsignup%2Feg%3Fingress%3Damzn%26ref_%3Dsm_w_ics_m_f_all&openid.identity=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.assoc_handle=amzn_respawn_desktop_us&openid.mode=checkid_setup&openid.claimed_id=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&ssoResponse=eyJ6aXAiOiJERUYiLCJlbmMiOiJBMjU2R0NNIiwiYWxnIjoiQTI1NktXIn0.yf9Wgo4I2tZaftdNDV9dGzDE3WBRtCwlofy9T0xdJFn6Z8J9GkkQ2A.YfhrqNQaRSrDgXpJ.5jj055CVEHpYJa2zcCUKxxPSxxcSeVjvQFpUjEP-_kOek_h1S8Zy6jujXVJSGJtsliAleSPGnrlvysESKkSEXnAWFOvJRcE9JepYQJulvu"
AMAZON_PRIME_GAMING = "https://gaming.amazon.com/prime-gaming-capsule-nov-23/dp/amzn1.pg.item.fe075900-6304-4e90-a13d-d5e04635dca9?ingress=amzn&ref_=SM_LeagueofLegends_S13_D09_CRWN"
AMAZON_NUMBER_SETTING = "https://www.amazon.com/ap/profile/mobilephone?ref_=ax_am_landing_add_mobile&openid.assoc_handle=usflex&referringAppAction=CNEP"

driver = Driver(uc=True)
driver.get(AMAZON_PRIME_LOGIN)
driver.switch_to.new_window('tab')
driver.get(AMAZON_PRIME_GAMING)
driver.switch_to.new_window('tab')
driver.get(AMAZON_NUMBER_SETTING)
input()
driver.quit()
```
1medium
Title: How to call the APIs (upgrade, current) from a Python script Body: I have a database `db`. I want to check whether `flask_migrate` has already created the tables in `db`, and if not, upgrade it. The commands exist, but there are no examples of calling `migrate`/`upgrade` from a Python script. The test files in flask_migrate also shell out to the commands, such as: `(o, e, s) = run_cmd('python app.py db migrate')`
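A hedged sketch of calling the migration API from Python instead of the CLI, assuming the usual Flask-Migrate wiring (`app`, `db`, and `Migrate(app, db)` already set up as in the docs; the `myapp` module is a hypothetical placeholder):

```python
from flask_migrate import upgrade
from sqlalchemy import inspect

from myapp import app, db  # hypothetical module holding the Flask app and db

with app.app_context():
    # alembic_version is the bookkeeping table Flask-Migrate/Alembic creates,
    # so its absence is a reasonable "nothing migrated yet" signal.
    if not inspect(db.engine).has_table("alembic_version"):
        upgrade()  # programmatic equivalent of `flask db upgrade`
```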
1medium
Title: vocabulary set_embedding(glove) maps all OOV terms to the same vector if no lookup is provided Body: When attaching a pre-trained embedding to a built vocab, namely `vocab.set_embedding(glove_em)`, by design all vocabulary entries that are not present in the pre-trained embedding are mapped to the `<unk>` vector produced by init_unknown_vec (default `nd.zeros`). This is a bit awkward, since `<unk>` was already defined when building the vocabulary (e.g., via a minimum term-frequency threshold), and one may expect each of these OOV terms to at least be mapped to a different random vector instead of the same one. Of course, using FastText with ngrams enabled, or providing an `unknown_lookup`, resolves this; still, as the default behavior it is a bit counter to convention.

```python
import gluonnlp
from mxnet import nd

min_freq = 1  # stand-in for self.min_freq in the original snippet
text_data = "Computing-Tabulating-Recording \n affective-motivational \n teacher-dealers"
counter = gluonnlp.data.count_tokens(text_data.split())  # count tokens, not characters
vocab = gluonnlp.Vocab(counter, unknown_token='<unk>', min_freq=min_freq)
em = gluonnlp.embedding.create('GloVe', unknown_token='<unk>',
                               source='glove.840B.300d', allow_extend=True,
                               init_unknown_vec=nd.random.uniform)
vocab.set_embedding(em)
```
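A hedged sketch of a user-level workaround, assuming gluonnlp's TokenEmbedding supports membership tests and per-token assignment (which I believe it does): give each OOV token its own random vector after set_embedding, instead of the shared `<unk>` row.

```python
# Re-randomize OOV rows so they differ per token (continuing the snippet above).
oov_tokens = [tok for tok in vocab.idx_to_token if tok not in em]
for tok in oov_tokens:
    vocab.embedding[tok] = nd.random.uniform(shape=(300,))  # 300-d for glove.840B.300d
```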
1medium
Title: dump-model-params.py fails for a Faster R-CNN Horovod checkpoint Body: ### 1. What you did: (1) Using the examples, the command I ran was: `dump-model-params.py --meta mymeta myinputchpt outputnpz.npz` (2) No changes were made to the examples. ### 2. What you observed:

```
Traceback (most recent call last):
  File "dump-model-params.py", line 25, in <module>
    tf.train.import_meta_graph(args.meta, clear_devices=True)
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1674, in import_meta_graph
    meta_graph_or_file, clear_devices, import_scope, **kwargs)[0]
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1696, in _import_meta_graph_with_return_elements
    **kwargs))
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/framework/meta_graph.py", line 806, in import_scoped_meta_graph_with_return_elements
    return_elements=return_elements)
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
    return func(*args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 391, in import_graph_def
    _RemoveDefaultAttrs(op_dict, producer_op_list, graph_def)
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 158, in _RemoveDefaultAttrs
    op_def = op_dict[node.op]
KeyError: 'HorovodAllreduce'
```

### 4. Your environment: Python version / TF version / Tensorpack version: (not filled in)
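The KeyError suggests the saved graph references Horovod's custom op, which vanilla TensorFlow does not know about. A hedged sketch of a possible workaround (an assumption on my part, not a confirmed fix): importing horovod.tensorflow loads Horovod's op library, registering ops such as HorovodAllreduce before import_meta_graph parses the graph.

```python
# Near the top of dump-model-params.py (sketch):
import tensorflow as tf
import horovod.tensorflow  # noqa: F401  -- assumption: registers the
                           # 'HorovodAllreduce' op with the TF runtime

tf.train.import_meta_graph("mymeta", clear_devices=True)
```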
1medium
Title: What do the right Y-axis ticks in missingno represent? Body: Based on the example the author runs in the docs, I thought the Y axis represented the number of entries corresponding to the percentage shown on the right Y axis. However, the values are wildly inconsistent in my case: ![screenshot_20180717_094403](https://user-images.githubusercontent.com/13151704/42817974-093674f0-89a6-11e8-92a8-b6dd2203ac37.png) Could anyone explain what the right Y-axis ticks really represent?
1medium
Title: [BUG] No detailed information in testing error Body: ### Description The recent nightly build logs (https://github.com/recommenders-team/recommenders/actions/runs/10335387467/job/28609908093) don't provide sufficient information about the error. ![image](https://github.com/user-attachments/assets/c8a116d9-3fe6-4a0f-8454-24d9b264945f) ### In which platform does it happen? In the testing workflow. ### How do we replicate the issue? ### Expected behavior (i.e. solution) ### Willingness to contribute - [x] Yes, I can contribute for this issue independently. - [ ] Yes, I can contribute for this issue with guidance from Recommenders community. - [ ] No, I cannot contribute at this time. ### Other Comments
1medium
Title: [Train] Crash at end of training Body: ### What happened + What you expected to happen Recently I've started to experience a crash at the end of training. The backtrace is always the same:

```
Training completed after 1 iterations at 2025-03-05 04:39:56. Total running time: 8min 51s
2025-03-05 04:39:56,506 INFO tune.py:1009 -- Wrote the latest version of all result files and experiment state to 'earthdaily-pathfinders-scaleai/venus/afm-profiling/train/experimental-2025-03-05_04-31-02_a458' in 0.5035s.
(TorchTrainer pid=439, ip=10.212.157.221) *** SIGSEGV received at time=1741149596 on cpu 63 ***
(TorchTrainer pid=439, ip=10.212.157.221) PC: @ 0x7f60b5a3c7be (unknown) ray::gcs::TaskInfoAccessor::AsyncAddTaskEventData()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b6ee7050 1824 (unknown)
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b591d975 1392 ray::core::worker::TaskEventBufferImpl::FlushEvents()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b58a66ec 1488 ray::core::CoreWorker::Disconnect()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b58a6a9d 1152 ray::core::CoreWorker::ForceExit()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b58a6ecf 1680 ray::core::CoreWorker::HandleKillActor()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b589e3d4 192 ray::rpc::ServerCallImpl<>::HandleRequestImpl()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b5c2bbc8 1168 EventTracker::RecordExecution()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b5c0fffe 48 std::_Function_handler<>::_M_invoke()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b5c10476 112 boost::asio::detail::completion_handler<>::do_complete()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b62d68db 128 boost::asio::detail::scheduler::do_run_one()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b62d8259 288 boost::asio::detail::scheduler::run()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b62d8962 96 boost::asio::io_context::run()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b57ff0b1 1280 ray::core::CoreWorker::RunIOService()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b5d1d4e0 64 thread_proxy
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b6f341c4 (unknown) (unknown)
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: *** SIGSEGV received at time=1741149596 on cpu 63 ***
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: PC: @ 0x7f60b5a3c7be (unknown) ray::gcs::TaskInfoAccessor::AsyncAddTaskEventData()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b6ee7050 1824 (unknown)
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b591d975 1392 ray::core::worker::TaskEventBufferImpl::FlushEvents()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b58a66ec 1488 ray::core::CoreWorker::Disconnect()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b58a6a9d 1152 ray::core::CoreWorker::ForceExit()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b58a6ecf 1680 ray::core::CoreWorker::HandleKillActor()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b589e3d4 192 ray::rpc::ServerCallImpl<>::HandleRequestImpl()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479]
logging.cc:484: @ 0x7f60b5c2bbc8 1168 EventTracker::RecordExecution() (TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b5c0fffe 48 std::_Function_handler<>::_M_invoke() (TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b5c10476 112 boost::asio::detail::completion_handler<>::do_complete() (TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b62d68db 128 boost::asio::detail::scheduler::do_run_one() (TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b62d8259 288 boost::asio::detail::scheduler::run() (TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b62d8962 96 boost::asio::io_context::run() (TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b57ff0b1 1280 ray::core::CoreWorker::RunIOService() (TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,569 E 439 479] logging.cc:484: @ 0x7f60b5d1d4e0 64 thread_proxy (TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,569 E 439 479] logging.cc:484: @ 0x7f60b6f341c4 (unknown) (unknown) (TorchTrainer pid=439, ip=10.212.157.221) Fatal Python error: Segmentation fault (TorchTrainer pid=439, ip=10.212.157.221) (TorchTrainer pid=439, ip=10.212.157.221) (TorchTrainer pid=439, ip=10.212.157.221) Extension modules: msgpack._cmsgpack, google._upb._message, psutil._psutil_linux, psutil._psutil_posix, setproctitle, yaml._yaml, charset_normalizer.md, ray._raylet, numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, pandas._libs.tslibs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.tzconversion, pandas._libs.tslibs.ccalendar, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._libs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.tslib, pandas._libs.lib, pandas._libs.hashing, pyarrow.lib, pandas._libs.ops, pyarrow._compute, bottleneck.move, bottleneck.nonreduce, bottleneck.nonreduce_axis, bottleneck.reduce, pandas._libs.arrays, pandas._libs.index, pandas._libs.join, pandas._libs.sparse, pandas._libs.reduction, pandas._libs.indexing, pandas._libs.internals, pandas._libs.writers, pandas._libs.window.aggregations, pandas._libs.window.indexers, pandas._libs.reshape, pandas._libs.tslibs.strptime, pandas._libs.groupby, pandas._libs.testing, pandas._libs.parsers, pandas._libs.json, pyarrow._fs, pyarrow._azurefs, pyarrow._hdfs, pyarrow._gcsfs, pyarrow._s3fs, pyarrow._parquet, torch._C, torch._C._dynamo.autograd_compiler, torch._C._dynamo.eval_frame, torch._C._dynamo.guards, torch._C._dynamo.utils, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special, pydantic.typing, pydantic.errors, pydantic.version, pydantic.utils, pydantic.class_validators, 
pydantic.config, pydantic.color, pydantic.datetime_parse, pydantic.validators, pydantic.networks, pydantic.types, pydantic.json, pydantic.error_wrappers, pydantic.fields, pydantic.parse, pydantic.schema, pydantic.main, pydantic.dataclasses, pydantic.annotated_types, pydantic.decorator, pydantic.env_settings, pydantic.tools, pydantic, pyarrow._json, lazy_object_proxy.cext, matplotlib._c_internal_utils, PIL._imaging, matplotlib._path, kiwisolver._cext, matplotlib._image, _cffi_backend, scipy._lib._ccallback_c, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg.cython_lapack, scipy.linalg._cythonized_array_utils, scipy.linalg._solve_toeplitz, scipy.linalg._decomp_lu_cython, scipy.linalg._matfuncs_sqrtm_triu, scipy.linalg._matfuncs_expm, scipy.linalg._linalg_pythran, scipy.linalg.cython_blas, scipy.linalg._decomp_update, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.linalg._propack._spropack, scipy.sparse.linalg._propack._dpropack, scipy.sparse.linalg._propack._cpropack, scipy.sparse.linalg._propack._zpropack, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flow, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, multidict._multidict, yarl._quoting_c, propcache._helpers_c, aiohttp._http_writer, aiohttp._http_parser, aiohttp._websocket.mask, aiohttp._websocket.reader_c, frozenlist._frozenlist, sklearn.__check_build._check_build, sklearn.utils.murmurhash, scipy.spatial._ckdtree, scipy._lib.messagestream, scipy.spatial._qhull, scipy.spatial._voronoi, scipy.spatial._distance_wrap, scipy.spatial._hausdorff, scipy.special._ufuncs_cxx, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.special._ellip_harm_2, scipy.spatial.transform._rotation, scipy.optimize._group_columns, scipy.optimize._trlib._trlib, scipy.optimize._lbfgsb, _moduleTNC, scipy.optimize._moduleTNC, scipy.optimize._cobyla, scipy.optimize._slsqp, scipy.optimize._minpack, scipy.optimize._lsq.givens_elimination, scipy.optimize._zeros, scipy.optimize._cython_nnls, scipy._lib._uarray._uarray, scipy.linalg._decomp_interpolative, scipy.optimize._bglu_dense, scipy.optimize._lsap, scipy.optimize._direct, scipy.integrate._odepack, scipy.integrate._quadpack, scipy.integrate._vode, scipy.integrate._dop, scipy.integrate._lsoda, scipy.interpolate._fitpack, scipy.interpolate._dfitpack, scipy.interpolate._dierckx, scipy.interpolate._ppoly, scipy.interpolate._interpnd, scipy.interpolate._rbfinterp_pythran, scipy.interpolate._rgi_cython, scipy.interpolate._bspl, scipy.special.cython_special, scipy.stats._stats, scipy.stats._sobol, scipy.stats._qmc_cy, scipy.stats._biasedurn, scipy.stats._stats_pythran, scipy.stats._levy_stable.levyst, scipy.stats._ansari_swilk_statistics, scipy.stats._mvn, scipy.stats._rcont.rcont, scipy.ndimage._nd_image, scipy.ndimage._rank_filter_1d, _ni_label, scipy.ndimage._ni_label, sklearn.utils._openmp_helpers, sklearn.utils._logistic_sigmoid, sklearn.utils.sparsefuncs_fast, sklearn.preprocessing._csr_polynomial_expansion, sklearn.utils._typedefs, sklearn.utils._readonly_array_wrapper, sklearn.metrics._dist_metrics, sklearn.metrics.cluster._expected_mutual_info_fast, sklearn.utils._cython_blas, sklearn.utils._heap, sklearn.utils._sorting, sklearn.utils._vector_sentinel, sklearn.metrics._pairwise_distances_reduction, sklearn.metrics._pairwise_fast, 
sklearn.utils._random, markupsafe._speedups, scipy.fftpack.convolve, tornado.speedups, greenlet._greenlet (total: 228)
```

I am experiencing this regardless of the number of workers I use (one or multiple). I am always using the DDP strategy, though. This is how I am initializing the PyTorch Lightning trainer in my training loop:

```python
trainer = Trainer(
    strategy=ray.train.lightning.RayDDPStrategy(),
    plugins=[ray.train.lightning.RayLightningEnvironment()],
    # ...
)
```

Beyond that, I'm not sure what information would be relevant, but I am happy to provide more info about the way I am running my training jobs upon request. The same backtrace has been reported before in a comment on [this issue](https://github.com/ray-project/ray/issues/49998); however, the original description of that issue seems unrelated, so I am creating a new issue here. ### Versions / Dependencies Ray: 2.43.0. I think I've only experienced this with Ray 2.43.0 and not with an older version. ### Reproduction script This is hard to reproduce; it happens occasionally at the end of training. ### Issue Severity Medium: It is a significant difficulty but I can work around it.
2hard
Title: [Feature Request] Optionally pass errors raised from a callback-specific error handler to the global error handler Body: **Is your feature request related to a problem? Please describe.** Currently, if I define `on_error` for both the `Dash()` class and for a specific callback, the callback-specific `on_error` overwrites the global `on_error`. It would be nice to have an option to chain these error handlers, so that exceptions raised from the callback-specific `on_error` are passed on to the global `on_error`. An example I can think of: - callback-specific on_error: handles the user incorrectly filling in data - global on_error: catches unexpected errors (i.e. bugs) and notifies the developer **Describe the solution you'd like** Perhaps a `bool` argument to the `callback()` decorator that would enable passing uncaught exceptions from the local `on_error` to the global `on_error`, and the same argument to the `Dash()` class which would be used as a default for all callbacks? **Describe alternatives you've considered** - Wrapping the callback-specific on_error in a try/except block and calling the global error handler manually (sketched below). - Wrapping the body of a callback in a try/except block and calling the callback-specific on_error manually. I think both of these approaches are unnecessary boilerplate. **Additional context**
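A hedged sketch of the first workaround, assuming Dash's `on_error` handlers receive the raised exception as their argument (the handler names, layout, and error types are illustrative):

```python
from dash import Dash, Input, Output, dcc, html

def global_on_error(err):
    # Global handler: catch unexpected errors (i.e. bugs), notify the developer.
    print(f"unexpected error, notify developer: {err!r}")

def callback_on_error(err):
    # Callback-specific handler: deal with expected user errors locally, and
    # chain everything else to the global handler by hand, since Dash
    # currently replaces the global handler rather than chaining it.
    if isinstance(err, ValueError):
        print("user filled in data incorrectly")
        return
    global_on_error(err)

app = Dash(__name__, on_error=global_on_error)
app.layout = html.Div([dcc.Input(id="inp"), html.Div(id="out")])

@app.callback(Output("out", "children"), Input("inp", "value"),
              on_error=callback_on_error)
def echo(value):
    if value == "bad":
        raise ValueError("bad user input")  # expected, handled locally
    if value == "boom":
        raise RuntimeError("a bug")         # unexpected, chained manually
    return value or ""
```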
1medium
Title: Any plan to support a Slack bot? Body: Hi, kindly asking whether you could consider adding support for a Slack bot.
3misc
Title: Batch download in "like" mode throws an error Body: **The same account downloads fine in "post" mode, but "like" mode throws an error:**

```
For batch download just press Enter; to download a single video paste the video link:
---- Config loaded ----
---- Downloading multiple videos for you ----
---- The user's sec_id=MS4wLjABAAAAs_Dkw8_CynCMjVN601UkAa8M3TGfDgLJqYUn2tKeyy_iEDch9ifarviMtWSjD4qN ----
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-2-6256bfbbea5e> in <module>
      1 import TikTokMulti as MTK
      2
----> 3 MTK.TikTok()
      4
      5 #单视频下载

~\Desktop\TikTokDownload-main\TikTokMulti.py in __init__(self)
     99
    100         print('----读取配置完成----\r')
--> 101         self.judge_link()
    102
    103     def out_Print(self):

~\Desktop\TikTokDownload-main\TikTokMulti.py in judge_link(self)
    146         response = requests.get(url = api_post_url,headers = self.headers)
    147         html = json.loads(response.content.decode())
--> 148         self.nickname = html['aweme_list'][0]['author']['nickname']
    149         if not os.path.exists(self.save + self.mode + "\\" + self.nickname):
    150             os.makedirs(self.save + self.mode + "\\" + self.nickname)

IndexError: list index out of range
```
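The crash is an unguarded `[0]` on an empty `aweme_list`. A hedged sketch of a defensive fix around line 148 of TikTokMulti.py (not the project's official patch); the list may come back empty in "like" mode, e.g. when the user's liked videos are private:

```python
aweme_list = html.get('aweme_list') or []
if not aweme_list:
    # Nothing returned in "like" mode; the user's liked videos may be private.
    raise SystemExit('No videos returned in like mode for this user.')
self.nickname = aweme_list[0]['author']['nickname']
```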
1medium
Title: Feature request: Support shapes Body: Hi, as you suggested in #107 I'm opening an issue for this. The other issues are quite old; I understand some infrastructure is already there to support shapes. Do you have updated code that would help with supporting shapes? Shape support might not be a great place for a first contribution :-/
2hard
Title: Datasets that fail DataFrameModel validation unexpectedly alter field properties Body: **Describe the bug** When a dataframe fails validation, it can corrupt the state of the `coerce` attribute for a particular field. In my case, a field is defined with coerce=True, but after trying to validate the bad dataset, the field has coerce=False. - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the latest version of pandera. - [ ] (optional) I have confirmed this bug exists on the main branch of pandera. **Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug. #### Code Sample, a copy-pastable example

```python
import pandas as pd
import pandera as pa
from pandera.typing import Series

class Table(pa.DataFrameModel):
    """Simple table with 2 columns."""
    chr: Series[str] = pa.Field(nullable=False, description="Chromosome",
                                str_length=dict(min_value=1), coerce=True)
    start: Series[int] = pa.Field(nullable=False, ge=0,
                                  description="0-based inclusive start position of region")

assert Table.to_schema().columns["chr"].coerce  # Passes as expected

Table.validate(pd.DataFrame({"chr": ["chr1", "chr2"], "start": [0, 10]}))
assert Table.to_schema().columns["chr"].coerce  # Still passes as expected

try:
    Table.validate(pd.DataFrame({"chr": ["", "chr1"], "start": [0, 10]}))
    raise AssertionError("Dataframe should fail validation as str_length constraint not met")
except pa.errors.SchemaError:
    ...

# Unexpectedly fails. coerce is now False for this Field.
# Failed validation essentially corrupted the state of the class.
assert Table.to_schema().columns["chr"].coerce
```

#### Expected behavior The state of a DataFrameModel should not be changed by a failed dataframe validation. #### Additional context I believe the problem is caused by these lines: https://github.com/unionai-oss/pandera/blob/main/pandera/backends/pandas/container.py#L221-L223 The coerce attribute of the field is changed during dataframe validation and is intended to be restored once validation completes. But if an exception is raised, the attributes are not properly reverted, as the code jumps straight to the except blocks. A simple fix is to move these reversion lines outside of the try/except block, after the last of the exception blocks, so that reversion is always applied regardless of validation success (see the sketch below). Alternatively, perhaps it's cleaner to deepcopy the schema_component to avoid the complicated logic around changing and reverting these attributes.
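A hedged sketch of the shape of the proposed fix inside the validation backend (names are illustrative, not pandera's actual internals): restoring the attributes in a `finally` clause guarantees reversion even when a SchemaError propagates.

```python
# Illustrative structure only, not pandera's real code.
saved = {col: col.coerce for col in schema_components}  # remember originals
try:
    for col in schema_components:
        col.coerce = True        # temporary change made during validation
    result = run_validation(check_obj)
finally:
    for col, original in saved.items():
        col.coerce = original    # always revert, even on SchemaError
```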
1medium
Title: 03_linear_regression_sol.py can't run; errors as follows Body:

```
Traceback (most recent call last):
  File "D:/stanford_tensorflow_tutorials/tf_oreilly/01_linear_regression_sol.py", line 19, in <module>
    book = xlrd.open_workbook(DATA_FILE, encoding_override="utf-8")
  File "C:\Anaconda3\lib\site-packages\xlrd\__init__.py", line 441, in open_workbook
    ragged_rows=ragged_rows,
  File "C:\Anaconda3\lib\site-packages\xlrd\book.py", line 107, in open_workbook_xls
    bk.fake_globals_get_sheet()
  File "C:\Anaconda3\lib\site-packages\xlrd\book.py", line 687, in fake_globals_get_sheet
    self.get_sheets()
  File "C:\Anaconda3\lib\site-packages\xlrd\book.py", line 678, in get_sheets
    self.get_sheet(sheetno)
  File "C:\Anaconda3\lib\site-packages\xlrd\book.py", line 669, in get_sheet
    sh.read(self)
  File "C:\Anaconda3\lib\site-packages\xlrd\sheet.py", line 1475, in read
    self.update_cooked_mag_factors()
  File "C:\Anaconda3\lib\site-packages\xlrd\sheet.py", line 1543, in update_cooked_mag_factors
    elif not (10 <= zoom <= 400):
TypeError: unorderable types: int() <= NoneType()
```
1medium
Title: dask-expr increasing calculation time Body: Hi guys! I would like to report a possible bug in dask-expr. Running this code with dask-expr takes ~10 seconds on my machine; without dask-expr it takes ~2 seconds. I've attached a sample csv file: [a (1).csv](https://github.com/user-attachments/files/17492243/a.1.csv)

```python
import dask.dataframe as dd
import pandas as pd

df = dd.read_csv('a (1).csv')
df.head()

novas_colunas = [
    'CD_5_AJST_PVS', 'CD_5_DT_CTB_AJST_PVS', 'CD_6_RCBT_RC', 'CD_6_DT_CTB_RCBT_RC',
    'CD_7_RVSA_RCBT_RC', 'CD_7_DT_CTB_RVSA_RCBT_RC', 'CD_8_PGTO_CBAC', 'CD_9_RVSA_PGTO_CBAC'
]
for coluna in novas_colunas:
    df[coluna] = pd.Series(dtype='datetime64[ns]')

df['CD_5_AJST_PVS'] = df['CD_5_AJST_PVS'].mask(cond=(df['codigoTipoEvento'] == 5), other=df['dataHoraEvento'])
df['CD_5_DT_CTB_AJST_PVS'] = df['CD_5_DT_CTB_AJST_PVS'].mask(cond=(df['codigoTipoEvento'] == 5), other=df['dataHoraEvento'])
df['CD_6_RCBT_RC'] = df['CD_6_RCBT_RC'].mask(cond=(df['codigoTipoEvento'] == 6), other=df['dataHoraEvento'])
df['CD_6_DT_CTB_RCBT_RC'] = df['CD_6_DT_CTB_RCBT_RC'].mask(cond=(df['codigoTipoEvento'] == 6), other=df['dataHoraEvento'])
df['CD_7_RVSA_RCBT_RC'] = df['CD_7_RVSA_RCBT_RC'].mask(cond=(df['codigoTipoEvento'] == 6), other=df['dataHoraEvento'])
df['CD_7_DT_CTB_RVSA_RCBT_RC'] = df['CD_7_DT_CTB_RVSA_RCBT_RC'].mask(cond=(df['codigoTipoEvento'] == 6), other=df['dataHoraEvento'])
df['CD_8_PGTO_CBAC'] = df['CD_8_PGTO_CBAC'].mask(cond=(df['codigoTipoEvento'] == 6), other=df['dataHoraEvento'])
df['CD_9_RVSA_PGTO_CBAC'] = df['CD_9_RVSA_PGTO_CBAC'].mask(cond=(df['codigoTipoEvento'] == 6), other=df['dataHoraEvento'])

df = df.drop(columns=['codigoTipoEvento', 'dataHoraEvento'])
df.compute()
```

**Environment**:
- Dask version: 2024.10.0
- dask-expr: 1.1.16
- Python version: 3.11
- Operating System: Windows
- Install method (conda, pip, source): Pip
1medium
Title: token_type_ids has only two possible values, yes? Body: I read the code and found this in the comments: `token_type_ids = tf.constant([[0, 0, 1], [0, 2, 0]])`. But I think token_type_ids can only be 0 or 1. Am I wrong?
1medium
Title: Allreduce hang in YOLOv3 training on multiple machines Body: **Environment:**
1. Framework: MXNet
2. Framework version: 1.6.0.post0
3. Horovod version: 0.22.0
4. MPI version: 3.1.0
5. CUDA version: 10.2
6. NCCL version: 2.9.6
7. Python version: 3.6
8. Spark / PySpark version:
9. Ray version:
10. OS and version: Ubuntu 18.04.5 LTS
11. GCC version: 7.5.0
12. CMake version: 3.20.2

**Checklist:**
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if your question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?

**Bug report:** Hi all, I get the warning below when using 2 nodes with 2 GPUs for distributed training. Environment:

```
docker image: gluonai/gluon-cv:gpu-latest
python: 3.6
mxnet-cu102: 1.6.0.post0
horovod: 0.22.0
```

run_multi_node.sh in the gluon-cv directory is:

```
export CUDA_VISIBLE_DEVICES=2
export NCCL_DEBUG=info
export NCCL_SOCKET_IFNAME=eth0
export NCCL_IB_DISABLE=1
rm -rf log
mkdir log
export MXNET_CUDNN_AUTOTUNE_DEFAULT=0
export MXNET_EXEC_ENABLE_ADDTO=1
python3 ./scripts/detection/yolo/train_yolo3.py \
    --network darknet53 \
    --dataset=coco \
    --batch-size=4 \
    --horovod \
    --num-workers 8 \
    --log-interval 10 \
    --lr-decay-epoch 220,250 \
    --epochs 280 \
    --warmup-epochs 2 \
    --mixup \
    --no-mixup-epochs 20 \
    --label-smooth --no-wd \
    --save-interval 1 \
    --val-interval 1 \
    --syncbn \
    --save-prefix log/
```

and the logs are:

```
root@yq01-sys-hic-v100-box-a223-0155:/home/users/liuyuhui/workspace/gluon-cv# /usr/bin/mpirun bash run_multi_node.sh
Mon May 24 06:42:09 2021[1,1]<stdout>:loading annotations into memory...
Mon May 24 06:42:09 2021[1,0]<stdout>:loading annotations into memory...
Mon May 24 06:42:25 2021[1,0]<stdout>:Done (t=16.53s)
Mon May 24 06:42:25 2021[1,0]<stdout>:creating index...
Mon May 24 06:42:26 2021[1,0]<stdout>:index created!
Mon May 24 06:42:27 2021[1,1]<stdout>:Done (t=17.78s)
Mon May 24 06:42:27 2021[1,1]<stdout>:creating index...
Mon May 24 06:42:27 2021[1,1]<stdout>:index created!
Mon May 24 06:42:51 2021[1,0]<stdout>:loading annotations into memory...
Mon May 24 06:42:52 2021[1,0]<stdout>:Done (t=0.44s)
Mon May 24 06:42:52 2021[1,0]<stdout>:creating index...
Mon May 24 06:42:52 2021[1,0]<stdout>:index created!
Mon May 24 06:42:52 2021[1,1]<stdout>:loading annotations into memory...
Mon May 24 06:42:52 2021[1,1]<stdout>:Done (t=0.44s)
Mon May 24 06:42:52 2021[1,1]<stdout>:creating index...
Mon May 24 06:42:52 2021[1,1]<stdout>:index created!
Mon May 24 06:43:03 2021[1,0]<stderr>:INFO:root:Namespace(amp=False, batch_size=4, data_shape=416, dataset='coco', epochs=280, gpus='0', horovod=True, label_smooth=True, log_interval=10, lr=0.001, lr_decay=0.1, lr_decay_epoch='220,250', lr_decay_period=0, lr_mode='step', mixup=True, momentum=0.9, network='darknet53', no_mixup_epochs=20, no_random_shape=False, no_wd=True, num_samples=117266, num_workers=8, resume='', save_interval=1, save_prefix='log/yolo3_darknet53_coco', seed=233, start_epoch=0, syncbn=True, val_interval=1, warmup_epochs=2, warmup_lr=0.0, wd=0.0005) Mon May 24 06:43:03 2021[1,0]<stderr>:INFO:root:Start training from [Epoch 0] Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Bootstrap : Using eth0:10.255.100.13<0> Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO NCCL_IB_DISABLE set by environment to 1. Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO NET/Socket : Using [0]eth0:10.255.100.13<0> Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Using network Socket Mon May 24 06:43:04 2021[1,0]<stdout>:NCCL version 2.9.6+cuda10.2 Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO Bootstrap : Using xgbe0:10.127.28.15<0> Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO NCCL_IB_DISABLE set by environment to 1. 
Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO NET/Socket : Using [0]xgbe0:10.127.28.15<0> Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO Using network Socket Mon May 24 06:43:04 2021[1,1]<stderr>:INFO:root:Namespace(amp=False, batch_size=4, data_shape=416, dataset='coco', epochs=280, gpus='0', horovod=True, label_smooth=True, log_interval=10, lr=0.001, lr_decay=0.1, lr_decay_epoch='220,250', lr_decay_period=0, lr_mode='step', mixup=True, momentum=0.9, network='darknet53', no_mixup_epochs=20, no_random_shape=False, no_wd=True, num_samples=117266, num_workers=8, resume='', save_interval=1, save_prefix='log/yolo3_darknet53_coco', seed=233, start_epoch=0, syncbn=True, val_interval=1, warmup_epochs=2, warmup_lr=0.0, wd=0.0005) Mon May 24 06:43:04 2021[1,1]<stderr>:INFO:root:Start training from [Epoch 0] Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Channel 00/02 : 0 1 Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Channel 01/02 : 0 1 Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] -1/-1/-1->0->1 Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Setting affinity for GPU 2 to 0fffff Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO Setting affinity for GPU 2 to ffffff Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Channel 00 : 1[41000] -> 0[42000] [receive] via NET/Socket/0 Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO Channel 00 : 0[42000] -> 1[41000] [receive] via NET/Socket/0 Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Channel 01 : 1[41000] -> 0[42000] [receive] via NET/Socket/0 Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO Channel 01 : 0[42000] -> 1[41000] [receive] via NET/Socket/0 Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Channel 00 : 0[42000] -> 1[41000] [send] via NET/Socket/0 Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO Channel 00 : 1[41000] -> 0[42000] [send] via NET/Socket/0 Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Channel 01 : 0[42000] -> 1[41000] [send] via NET/Socket/0 Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO Channel 01 : 1[41000] -> 0[42000] [send] via NET/Socket/0 Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO Connected all rings Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO Connected all trees Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] 
NCCL INFO comm 0x7f440c35e630 rank 1 nranks 2 cudaDev 0 busId 41000 - Init COMPLETE
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Connected all rings
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Connected all trees
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO comm 0x7f9f9c37d480 rank 0 nranks 2 cudaDev 0 busId 42000 - Init COMPLETE
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Launch mode Parallel
Mon May 24 06:43:09 2021[1,0]<stderr>:INFO:root:[Epoch 0][Batch 9], LR: 1.71E-07, Speed: 19.202 samples/sec, ObjLoss=6658.178, BoxCenterLoss=20.613, BoxScaleLoss=24.689, ClassLoss=435.474
Mon May 24 06:43:12 2021[1,0]<stderr>:INFO:root:[Epoch 0][Batch 19], LR: 3.41E-07, Speed: 17.696 samples/sec, ObjLoss=7650.106, BoxCenterLoss=16.439, BoxScaleLoss=18.395, ClassLoss=339.499
Mon May 24 06:43:14 2021[1,0]<stderr>:INFO:root:[Epoch 0][Batch 29], LR: 5.12E-07, Speed: 19.079 samples/sec, ObjLoss=6205.702, BoxCenterLoss=18.171, BoxScaleLoss=19.766, ClassLoss=378.784
Mon May 24 06:43:17 2021[1,0]<stderr>:INFO:root:[Epoch 0][Batch 39], LR: 6.82E-07, Speed: 9.979 samples/sec, ObjLoss=5282.987, BoxCenterLoss=17.172, BoxScaleLoss=18.486, ClassLoss=358.852
Mon May 24 06:47:08 2021[1,0]<stderr>:[Mon May 24 06:47:08 2021[1,0]<stderr>:2021-05-24 06:47:08.577207: W /tmp/pip-install-8i6hkcw7/horovod_78663a94386f4ebabc36dedd9415f5c2/horovod/common/stall_inspector.cc:105] One or more tensors were submitted to be reduced, gathered or broadcasted by subset of ranks and are waiting for remainder of ranks for more than 60 seconds. This may indicate that different ranks are trying to submit different tensors or that only subset of ranks is submitting tensors, which will cause deadlock.
Mon May 24 06:47:08 2021[1,0]<stderr>:Missing ranks:
Mon May 24 06:47:08 2021[1,0]<stderr>:1: [horovod_allreduce.0, horovod_allreduce.1, horovod_allreduce.100, horovod_allreduce.101, horovod_allreduce.104, horovod_allreduce.105 ...]
```

It seems the allreduce is timing out; however, the MNIST demo provided by Horovod runs fine on the same machines.
run.sh is as follows:

```
export NCCL_DEBUG=info
export NCCL_SOCKET_IFNAME=eth0
export NCCL_IB_DISABLE=1
python3 mxnet_mnist.py
```

```
root@yq01-sys-hic-v100-box-a223-0155:/home/users/liuyuhui/workspace/horovod/examples/mxnet# /usr/bin/mpirun bash run.sh
Mon May 24 07:12:45 2021[1,1]<stderr>:INFO:root:Namespace(batch_size=64, dtype='float32', epochs=5, gradient_predivide_factor=1.0, lr=0.01, momentum=0.9, no_cuda=False)
Mon May 24 07:12:45 2021[1,0]<stderr>:INFO:root:Namespace(batch_size=64, dtype='float32', epochs=5, gradient_predivide_factor=1.0, lr=0.01, momentum=0.9, no_cuda=False)
Mon May 24 07:12:45 2021[1,0]<stderr>:INFO:root:data-0/mnist.zip exists, skipping download
Mon May 24 07:12:45 2021[1,1]<stderr>:INFO:root:data-1/mnist.zip exists, skipping download
Mon May 24 07:12:46 2021[1,1]<stderr>:[07:12:46] src/io/iter_mnist.cc:Mon May 24 07:12:46 2021[1,1]<stderr>:113: MNISTIter: load 30000 images, shuffle=1, shape=[64,1,28,28]
Mon May 24 07:12:46 2021[1,0]<stderr>:[07:12:46Mon May 24 07:12:46 2021[1,0]<stderr>:] src/io/iter_mnist.cc:113: MNISTIter: load 30000 images, shuffle=1, shape=[64,1,28,28]
Mon May 24 07:12:46 2021[1,1]<stderr>:[07:12:46] src/io/iter_mnist.cc:113: MNISTIter: load 10000 images, shuffle=1, shape=[Mon May 24 07:12:46 2021[1,1]<stderr>:64,1,28,28]
Mon May 24 07:12:47 2021[1,0]<stderr>:[07:12:47] src/io/iter_mnist.cc:113: MNISTIter: load 10000 images, shuffle=Mon May 24 07:12:47 2021[1,0]<stderr>:1, shape=[64,1,28,28]
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Bootstrap : Using eth0:10.255.100.13<0>
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO NCCL_IB_DISABLE set by environment to 1.
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO NET/Socket : Using [0]eth0:10.255.100.13<0>
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Using network Socket
Mon May 24 07:12:50 2021[1,0]<stdout>:NCCL version 2.9.6+cuda10.2
Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Bootstrap : Using xgbe0:10.127.28.15<0>
Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO NCCL_IB_DISABLE set by environment to 1.
Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO NET/Socket : Using [0]xgbe0:10.127.28.15<0> Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Using network Socket Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Channel 00/02 : 0 1 Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Channel 01/02 : 0 1 Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] -1/-1/-1->0->1 Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Setting affinity for GPU 0 to 0fffff Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Setting affinity for GPU 0 to ffffff Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Channel 00 : 1[3f000] -> 0[40000] [receive] via NET/Socket/0 Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Channel 00 : 0[40000] -> 1[3f000] [receive] via NET/Socket/0 Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Channel 01 : 1[3f000] -> 0[40000] [receive] via NET/Socket/0 Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Channel 01 : 0[40000] -> 1[3f000] [receive] via NET/Socket/0 Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Channel 00 : 0[40000] -> 1[3f000] [send] via NET/Socket/0 Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Channel 00 : 1[3f000] -> 0[40000] [send] via NET/Socket/0 Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Channel 01 : 0[40000] -> 1[3f000] [send] via NET/Socket/0 Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Channel 01 : 1[3f000] -> 0[40000] [send] via NET/Socket/0 Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Connected all rings Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Connected all trees Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Connected all rings Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO comm 0x7ff8d8346190 rank 0 nranks 2 cudaDev 0 busId 40000 - Init COMPLETE Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Connected all trees Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer Mon May 24 07:12:50 
2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO comm 0x7f22c834adb0 rank 1 nranks 2 cudaDev 0 busId 3f000 - Init COMPLETE Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Launch mode Parallel Mon May 24 07:12:50 2021[1,0]<stderr>:[07:12:50] Mon May 24 07:12:50 2021[1,0]<stderr>:src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:97: Running performance tests to find the best convolution algorithm, this can take a while... (set the environment variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable) Mon May 24 07:12:50 2021[1,1]<stderr>:[07:12:50] Mon May 24 07:12:50 2021[1,1]<stderr>:src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:97: Running performance tests to find the best convolution algorithm, this can take a while... (set the environment variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable) Mon May 24 07:12:51 2021[1,1]<stderr>:INFO:root:[Epoch 0 Batch 100] Training: accuracy=0.867500 Mon May 24 07:12:51 2021[1,0]<stderr>:INFO:root:[Epoch 0 Batch 100] Training: accuracy=0.865469 Mon May 24 07:12:51 2021[1,1]<stderr>:INFO:root:[Epoch 0 Batch 200] Training: accuracy=0.914297 Mon May 24 07:12:51 2021[1,0]<stderr>:INFO:root:[Epoch 0 Batch 200] Training: accuracy=0.915547 Mon May 24 07:12:51 2021[1,0]<stderr>:INFO:root:[Epoch 0 Batch 300] Training: accuracy=0.934219 Mon May 24 07:12:51 2021[1,1]<stderr>:INFO:root:[Epoch 0 Batch 300] Training: accuracy=0.934010 Mon May 24 07:12:52 2021[1,0]<stderr>:INFO:root:[Epoch 0 Batch 400] Training: accuracy=0.944688 Mon May 24 07:12:52 2021[1,1]<stderr>:INFO:root:[Epoch 0 Batch 400] Training: accuracy=0.944688 Mon May 24 07:12:52 2021[1,0]<stderr>:INFO:root:Epoch[0] Speed=28557.88 samples/s Time cost=2.097635 Mon May 24 07:12:52 2021[1,0]<stderr>:INFO:root:Epoch[0] Train: accuracy=0.949753 Validation: accuracy=0.983373 Mon May 24 07:12:52 2021[1,0]<stderr>:INFO:root:[Epoch 1 Batch 100] Training: accuracy=0.984375 Mon May 24 07:12:52 2021[1,1]<stderr>:INFO:root:[Epoch 1 Batch 100] Training: accuracy=0.981094 Mon May 24 07:12:53 2021[1,0]<stderr>:INFO:root:[Epoch 1 Batch 200] Training: accuracy=0.985078 Mon May 24 07:12:53 2021[1,1]<stderr>:INFO:root:[Epoch 1 Batch 200] Training: accuracy=0.981016 Mon May 24 07:12:53 2021[1,0]<stderr>:INFO:root:[Epoch 1 Batch 300] Training: accuracy=0.985000 Mon May 24 07:12:53 2021[1,1]<stderr>:INFO:root:[Epoch 1 Batch 300] Training: accuracy=0.981563 Mon May 24 07:12:53 2021[1,0]<stderr>:INFO:root:[Epoch 1 Batch 400] Training: accuracy=0.985117 Mon May 24 07:12:53 2021[1,1]<stderr>:INFO:root:[Epoch 1 Batch 400] Training: accuracy=0.982500 Mon May 24 07:12:53 2021[1,0]<stderr>:INFO:root:Epoch[1] Speed=40224.99 samples/s Time cost=1.489223 Mon May 24 07:12:54 2021[1,0]<stderr>:INFO:root:Epoch[1] Train: accuracy=0.985544 Validation: accuracy=0.986378 Mon May 24 07:12:54 2021[1,0]<stderr>:INFO:root:[Epoch 2 Batch 100] Training: accuracy=0.990469 Mon May 24 07:12:54 2021[1,1]<stderr>:INFO:root:[Epoch 2 Batch 100] Training: accuracy=0.989219 Mon May 24 07:12:54 2021[1,0]<stderr>:INFO:root:[Epoch 2 Batch 200] Training: accuracy=0.990703 Mon May 24 07:12:54 2021[1,1]<stderr>:INFO:root:[Epoch 2 Batch 200] Training: accuracy=0.988047 Mon May 24 07:12:55 2021[1,0]<stderr>:INFO:root:[Epoch 2 Batch 300] Training: accuracy=0.990417 Mon May 24 07:12:55 2021[1,1]<stderr>:INFO:root:[Epoch 2 Batch 300] Training: accuracy=0.988229 Mon May 24 07:12:55 2021[1,0]<stderr>:INFO:root:[Epoch 2 Batch 400] Training: accuracy=0.990156 Mon May 24 07:12:55 
2021[1,1]<stderr>:INFO:root:[Epoch 2 Batch 400] Training: accuracy=0.988711
Mon May 24 07:12:55 2021[1,0]<stderr>:INFO:root:Epoch[2] Speed=40484.51 samples/s Time cost=1.479677
Mon May 24 07:12:55 2021[1,0]<stderr>:INFO:root:Epoch[2] Train: accuracy=0.990151 Validation: accuracy=0.988482
Mon May 24 07:12:55 2021[1,1]<stderr>:INFO:root:[Epoch 3 Batch 100] Training: accuracy=0.993594
Mon May 24 07:12:55 2021[1,0]<stderr>:INFO:root:[Epoch 3 Batch 100] Training: accuracy=0.993437
Mon May 24 07:12:56 2021[1,1]<stderr>:INFO:root:[Epoch 3 Batch 200] Training: accuracy=0.992266
Mon May 24 07:12:56 2021[1,0]<stderr>:INFO:root:[Epoch 3 Batch 200] Training: accuracy=0.993359
Mon May 24 07:12:56 2021[1,0]<stderr>:INFO:root:[Epoch 3 Batch 300] Training: accuracy=0.993229
Mon May 24 07:12:56 2021[1,1]<stderr>:INFO:root:[Epoch 3 Batch 300] Training: accuracy=0.992344
Mon May 24 07:12:56 2021[1,0]<stderr>:INFO:root:[Epoch 3 Batch 400] Training: accuracy=0.993203
Mon May 24 07:12:56 2021[1,1]<stderr>:INFO:root:[Epoch 3 Batch 400] Training: accuracy=0.992500
Mon May 24 07:12:57 2021[1,0]<stderr>:INFO:root:Epoch[3] Speed=34536.84 samples/s Time cost=1.734496
Mon May 24 07:12:57 2021[1,0]<stderr>:INFO:root:Epoch[3] Train: accuracy=0.993356 Validation: accuracy=0.989083
Mon May 24 07:12:57 2021[1,0]<stderr>:INFO:root:[Epoch 4 Batch 100] Training: accuracy=0.996094
Mon May 24 07:12:57 2021[1,1]<stderr>:INFO:root:[Epoch 4 Batch 100] Training: accuracy=0.995469
Mon May 24 07:12:58 2021[1,0]<stderr>:INFO:root:[Epoch 4 Batch 200] Training: accuracy=0.995547
Mon May 24 07:12:58 2021[1,1]<stderr>:INFO:root:[Epoch 4 Batch 200] Training: accuracy=0.994531
Mon May 24 07:12:58 2021[1,0]<stderr>:INFO:root:[Epoch 4 Batch 300] Training: accuracy=0.995469
Mon May 24 07:12:58 2021[1,1]<stderr>:INFO:root:[Epoch 4 Batch 300] Training: accuracy=0.994375
Mon May 24 07:12:58 2021[1,0]<stderr>:INFO:root:[Epoch 4 Batch 400] Training: accuracy=0.995273
Mon May 24 07:12:58 2021[1,1]<stderr>:INFO:root:[Epoch 4 Batch 400] Training: accuracy=0.994609
Mon May 24 07:12:59 2021[1,0]<stderr>:INFO:root:Epoch[4] Speed=34617.01 samples/s Time cost=1.730479
Mon May 24 07:12:59 2021[1,0]<stderr>:INFO:root:Epoch[4] Train: accuracy=0.995259 Validation: accuracy=0.990084
root@yq01-sys-hic-v100-box-a223-0155:/home/users/liuyuhui/workspace/horovod/examples/mxnet#
```
Here is the run.sh I use: ``` export NCCL_DEBUG=info export NCCL_SOCKET_IFNAME=eth0 export NCCL_IB_DISABLE=1 python3 mxnet_mnist.py ``` ``` root@yq01-sys-hic-v100-box-a223-0155:/home/users/liuyuhui/workspace/horovod/examples/mxnet# /usr/bin/mpirun bash run.sh Mon May 24 07:12:45 2021[1,1]<stderr>:INFO:root:Namespace(batch_size=64, dtype='float32', epochs=5, gradient_predivide_factor=1.0, lr=0.01, momentum=0.9, no_cuda=False) Mon May 24 07:12:45 2021[1,0]<stderr>:INFO:root:Namespace(batch_size=64, dtype='float32', epochs=5, gradient_predivide_factor=1.0, lr=0.01, momentum=0.9, no_cuda=False) Mon May 24 07:12:45 2021[1,0]<stderr>:INFO:root:data-0/mnist.zip exists, skipping download Mon May 24 07:12:45 2021[1,1]<stderr>:INFO:root:data-1/mnist.zip exists, skipping download Mon May 24 07:12:46 2021[1,1]<stderr>:[07:12:46] src/io/iter_mnist.cc:Mon May 24 07:12:46 2021[1,1]<stderr>:113: MNISTIter: load 30000 images, shuffle=1, shape=[64,1,28,28] Mon May 24 07:12:46 2021[1,0]<stderr>:[07:12:46Mon May 24 07:12:46 2021[1,0]<stderr>:] src/io/iter_mnist.cc:113: MNISTIter: load 30000 images, shuffle=1, shape=[64,1,28,28] Mon May 24 07:12:46 2021[1,1]<stderr>:[07:12:46] src/io/iter_mnist.cc:113: MNISTIter: load 10000 images, shuffle=1, shape=[Mon May 24 07:12:46 2021[1,1]<stderr>:64,1,28,28] Mon May 24 07:12:47 2021[1,0]<stderr>:[07:12:47] src/io/iter_mnist.cc:113: MNISTIter: load 10000 images, shuffle=Mon May 24 07:12:47 2021[1,0]<stderr>:1, shape=[64,1,28,28] Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Bootstrap : Using eth0:10.255.100.13<0> Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO NCCL_IB_DISABLE set by environment to 1. Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO NET/Socket : Using [0]eth0:10.255.100.13<0> Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Using network Socket Mon May 24 07:12:50 2021[1,0]<stdout>:NCCL version 2.9.6+cuda10.2 Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Bootstrap : Using xgbe0:10.127.28.15<0> Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO NCCL_IB_DISABLE set by environment to 1.
Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO NET/Socket : Using [0]xgbe0:10.127.28.15<0> Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Using network Socket Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Channel 00/02 : 0 1 Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Channel 01/02 : 0 1 Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] -1/-1/-1->0->1 Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Setting affinity for GPU 0 to 0fffff Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Setting affinity for GPU 0 to ffffff Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Channel 00 : 1[3f000] -> 0[40000] [receive] via NET/Socket/0 Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Channel 00 : 0[40000] -> 1[3f000] [receive] via NET/Socket/0 Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Channel 01 : 1[3f000] -> 0[40000] [receive] via NET/Socket/0 Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Channel 01 : 0[40000] -> 1[3f000] [receive] via NET/Socket/0 Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Channel 00 : 0[40000] -> 1[3f000] [send] via NET/Socket/0 Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Channel 00 : 1[3f000] -> 0[40000] [send] via NET/Socket/0 Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Channel 01 : 0[40000] -> 1[3f000] [send] via NET/Socket/0 Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Channel 01 : 1[3f000] -> 0[40000] [send] via NET/Socket/0 Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Connected all rings Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Connected all trees Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Connected all rings Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO comm 0x7ff8d8346190 rank 0 nranks 2 cudaDev 0 busId 40000 - Init COMPLETE Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Connected all trees Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512 Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer Mon May 24 07:12:50 
2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO comm 0x7f22c834adb0 rank 1 nranks 2 cudaDev 0 busId 3f000 - Init COMPLETE Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Launch mode Parallel Mon May 24 07:12:50 2021[1,0]<stderr>:[07:12:50] Mon May 24 07:12:50 2021[1,0]<stderr>:src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:97: Running performance tests to find the best convolution algorithm, this can take a while... (set the environment variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable) Mon May 24 07:12:50 2021[1,1]<stderr>:[07:12:50] Mon May 24 07:12:50 2021[1,1]<stderr>:src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:97: Running performance tests to find the best convolution algorithm, this can take a while... (set the environment variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable) Mon May 24 07:12:51 2021[1,1]<stderr>:INFO:root:[Epoch 0 Batch 100] Training: accuracy=0.867500 Mon May 24 07:12:51 2021[1,0]<stderr>:INFO:root:[Epoch 0 Batch 100] Training: accuracy=0.865469 Mon May 24 07:12:51 2021[1,1]<stderr>:INFO:root:[Epoch 0 Batch 200] Training: accuracy=0.914297 Mon May 24 07:12:51 2021[1,0]<stderr>:INFO:root:[Epoch 0 Batch 200] Training: accuracy=0.915547 Mon May 24 07:12:51 2021[1,0]<stderr>:INFO:root:[Epoch 0 Batch 300] Training: accuracy=0.934219 Mon May 24 07:12:51 2021[1,1]<stderr>:INFO:root:[Epoch 0 Batch 300] Training: accuracy=0.934010 Mon May 24 07:12:52 2021[1,0]<stderr>:INFO:root:[Epoch 0 Batch 400] Training: accuracy=0.944688 Mon May 24 07:12:52 2021[1,1]<stderr>:INFO:root:[Epoch 0 Batch 400] Training: accuracy=0.944688 Mon May 24 07:12:52 2021[1,0]<stderr>:INFO:root:Epoch[0] Speed=28557.88 samples/s Time cost=2.097635 Mon May 24 07:12:52 2021[1,0]<stderr>:INFO:root:Epoch[0] Train: accuracy=0.949753 Validation: accuracy=0.983373 Mon May 24 07:12:52 2021[1,0]<stderr>:INFO:root:[Epoch 1 Batch 100] Training: accuracy=0.984375 Mon May 24 07:12:52 2021[1,1]<stderr>:INFO:root:[Epoch 1 Batch 100] Training: accuracy=0.981094 Mon May 24 07:12:53 2021[1,0]<stderr>:INFO:root:[Epoch 1 Batch 200] Training: accuracy=0.985078 Mon May 24 07:12:53 2021[1,1]<stderr>:INFO:root:[Epoch 1 Batch 200] Training: accuracy=0.981016 Mon May 24 07:12:53 2021[1,0]<stderr>:INFO:root:[Epoch 1 Batch 300] Training: accuracy=0.985000 Mon May 24 07:12:53 2021[1,1]<stderr>:INFO:root:[Epoch 1 Batch 300] Training: accuracy=0.981563 Mon May 24 07:12:53 2021[1,0]<stderr>:INFO:root:[Epoch 1 Batch 400] Training: accuracy=0.985117 Mon May 24 07:12:53 2021[1,1]<stderr>:INFO:root:[Epoch 1 Batch 400] Training: accuracy=0.982500 Mon May 24 07:12:53 2021[1,0]<stderr>:INFO:root:Epoch[1] Speed=40224.99 samples/s Time cost=1.489223 Mon May 24 07:12:54 2021[1,0]<stderr>:INFO:root:Epoch[1] Train: accuracy=0.985544 Validation: accuracy=0.986378 Mon May 24 07:12:54 2021[1,0]<stderr>:INFO:root:[Epoch 2 Batch 100] Training: accuracy=0.990469 Mon May 24 07:12:54 2021[1,1]<stderr>:INFO:root:[Epoch 2 Batch 100] Training: accuracy=0.989219 Mon May 24 07:12:54 2021[1,0]<stderr>:INFO:root:[Epoch 2 Batch 200] Training: accuracy=0.990703 Mon May 24 07:12:54 2021[1,1]<stderr>:INFO:root:[Epoch 2 Batch 200] Training: accuracy=0.988047 Mon May 24 07:12:55 2021[1,0]<stderr>:INFO:root:[Epoch 2 Batch 300] Training: accuracy=0.990417 Mon May 24 07:12:55 2021[1,1]<stderr>:INFO:root:[Epoch 2 Batch 300] Training: accuracy=0.988229 Mon May 24 07:12:55 2021[1,0]<stderr>:INFO:root:[Epoch 2 Batch 400] Training: accuracy=0.990156 Mon May 24 07:12:55 
2021[1,1]<stderr>:INFO:root:[Epoch 2 Batch 400] Training: accuracy=0.988711 Mon May 24 07:12:55 2021[1,0]<stderr>:INFO:root:Epoch[2] Speed=40484.51 samples/s Time cost=1.479677 Mon May 24 07:12:55 2021[1,0]<stderr>:INFO:root:Epoch[2] Train: accuracy=0.990151 Validation: accuracy=0.988482 Mon May 24 07:12:55 2021[1,1]<stderr>:INFO:root:[Epoch 3 Batch 100] Training: accuracy=0.993594 Mon May 24 07:12:55 2021[1,0]<stderr>:INFO:root:[Epoch 3 Batch 100] Training: accuracy=0.993437 Mon May 24 07:12:56 2021[1,1]<stderr>:INFO:root:[Epoch 3 Batch 200] Training: accuracy=0.992266 Mon May 24 07:12:56 2021[1,0]<stderr>:INFO:root:[Epoch 3 Batch 200] Training: accuracy=0.993359 Mon May 24 07:12:56 2021[1,0]<stderr>:INFO:root:[Epoch 3 Batch 300] Training: accuracy=0.993229 Mon May 24 07:12:56 2021[1,1]<stderr>:INFO:root:[Epoch 3 Batch 300] Training: accuracy=0.992344 Mon May 24 07:12:56 2021[1,0]<stderr>:INFO:root:[Epoch 3 Batch 400] Training: accuracy=0.993203 Mon May 24 07:12:56 2021[1,1]<stderr>:INFO:root:[Epoch 3 Batch 400] Training: accuracy=0.992500 Mon May 24 07:12:57 2021[1,0]<stderr>:INFO:root:Epoch[3] Speed=34536.84 samples/s Time cost=1.734496 Mon May 24 07:12:57 2021[1,0]<stderr>:INFO:root:Epoch[3] Train: accuracy=0.993356 Validation: accuracy=0.989083 Mon May 24 07:12:57 2021[1,0]<stderr>:INFO:root:[Epoch 4 Batch 100] Training: accuracy=0.996094 Mon May 24 07:12:57 2021[1,1]<stderr>:INFO:root:[Epoch 4 Batch 100] Training: accuracy=0.995469 Mon May 24 07:12:58 2021[1,0]<stderr>:INFO:root:[Epoch 4 Batch 200] Training: accuracy=0.995547 Mon May 24 07:12:58 2021[1,1]<stderr>:INFO:root:[Epoch 4 Batch 200] Training: accuracy=0.994531 Mon May 24 07:12:58 2021[1,0]<stderr>:INFO:root:[Epoch 4 Batch 300] Training: accuracy=0.995469 Mon May 24 07:12:58 2021[1,1]<stderr>:INFO:root:[Epoch 4 Batch 300] Training: accuracy=0.994375 Mon May 24 07:12:58 2021[1,0]<stderr>:INFO:root:[Epoch 4 Batch 400] Training: accuracy=0.995273 Mon May 24 07:12:58 2021[1,1]<stderr>:INFO:root:[Epoch 4 Batch 400] Training: accuracy=0.994609 Mon May 24 07:12:59 2021[1,0]<stderr>:INFO:root:Epoch[4] Speed=34617.01 samples/s Time cost=1.730479 Mon May 24 07:12:59 2021[1,0]<stderr>:INFO:root:Epoch[4] Train: accuracy=0.995259 Validation: accuracy=0.990084 root@yq01-sys-hic-v100-box-a223-0155:/home/users/liuyuhui/workspace/horovod/examples/mxnet# ``` Any suggestions will be highly welcome!
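For what it's worth, here is the minimal sanity check I use to see whether allreduce itself gets through between the two hosts (a sketch, not the full mnist script; I'm assuming the `horovod.mxnet` allreduce signature from the docs of this horovod version):
```python
import mxnet as mx
import horovod.mxnet as hvd

hvd.init()
print(f"rank {hvd.rank()} of {hvd.size()}")

# A single tiny allreduce: if this hangs, the problem is in the NCCL/socket
# setup rather than in the training script itself.
x = mx.nd.ones((1,), ctx=mx.gpu(hvd.local_rank()))
y = hvd.allreduce(x, average=False, name="sanity_check")
print("allreduce result:", y.asscalar())  # expect the number of ranks
```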
2hard
Title: ModuleNotFoundError: No module named 'jsonschema.compat' Body: drf_yasg==1.20.0 swagger_spec_validator==2.7.3 ```python Internal Server Error: / Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner response = get_response(request) File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 204, in _get_response response = response.render() File "/usr/local/lib/python3.9/site-packages/django/template/response.py", line 105, in render self.content = self.rendered_content File "/usr/local/lib/python3.9/site-packages/rest_framework/response.py", line 70, in rendered_content ret = renderer.render(self.data, accepted_media_type, context) File "/usr/local/lib/python3.9/site-packages/drf_yasg/renderers.py", line 35, in render return codec.encode(data) File "/usr/local/lib/python3.9/site-packages/drf_yasg/codecs.py", line 73, in encode VALIDATORS[validator](copy.deepcopy(spec)) File "/usr/local/lib/python3.9/site-packages/drf_yasg/codecs.py", line 29, in _validate_swagger_spec_validator from swagger_spec_validator.common import SwaggerValidationError as SSVErr File "/usr/local/lib/python3.9/site-packages/swagger_spec_validator/__init__.py", line 8, in <module> from swagger_spec_validator.util import validate_spec_url File "/usr/local/lib/python3.9/site-packages/swagger_spec_validator/util.py", line 9, in <module> from swagger_spec_validator import validator12 File "/usr/local/lib/python3.9/site-packages/swagger_spec_validator/validator12.py", line 29, in <module> from swagger_spec_validator.ref_validators import default_handlers File "/usr/local/lib/python3.9/site-packages/swagger_spec_validator/ref_validators.py", line 14, in <module> from jsonschema.compat import iteritems ModuleNotFoundError: No module named 'jsonschema.compat' ```
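For anyone triaging: the failing import reproduces outside Django, and pinning jsonschema below 4.0 restored it for me (my assumption being that `jsonschema.compat` was removed in jsonschema 4.x, which swagger-spec-validator 2.7.3 still imports):
```python
# Minimal reproduction of the failure:
try:
    from jsonschema.compat import iteritems  # noqa: F401
except ModuleNotFoundError as exc:
    print(exc)  # No module named 'jsonschema.compat'

# Assumed workaround until swagger-spec-validator is upgraded:
# add "jsonschema<4" to requirements.txt alongside drf_yasg.
```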
1medium
Title: jina export kubernetes CLI should be able to export a Deployment YAML file for Kubernetes Body:
1medium
Title: [🕹️]Starry-eyed Supporter Body: ### What side quest or challenge are you solving? [🕹️]Starry-eyed Supporter ### Points 150 ### Description Starred OpenBB repo ### Provide proof that you've completed the task 1. ![image](https://github.com/user-attachments/assets/b23747ef-658e-4cf3-8aae-283bcac1fedb) 2. ![image](https://github.com/user-attachments/assets/29a1f9aa-568e-4432-9ab5-8062bf3f1f08) 3. ![image](https://github.com/user-attachments/assets/6ad49d04-26ea-4983-a4c7-4fcb6a3f4aa8) 4. ![image](https://github.com/user-attachments/assets/b22eaf60-eb40-4349-8867-feb927a0c97e) 5. ![image](https://github.com/user-attachments/assets/c2578916-d8de-4a0a-844a-23a0d7739108)
0easy
Title: Need Help with Group Photo Handling
Body: * face_recognition version:
* Python version: 3.6
* Operating System: Windows 10

### Description
I want to compare two photos. The first has the face of one individual. The second is a group photo with many faces. I want to see if the individual from the first photo appears in the second photo.

### What I Did
I tried:
```
face_locations = face_recognition.face_locations(img2_loaded)

for face in face_locations:
    top, right, bottom, left = face
    face_img = img2_loaded[top:bottom, left:right]
    face_recognition.compare_faces(img1_loaded, face_img)
```
I'm getting the error "operands could not be broadcast together with shapes (3088,2316,3) (90,89,3)". Any tips would be much appreciated.
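In case it helps others with the same question, here is a sketch of what I believe the intended comparison looks like: `compare_faces` wants face *encodings* rather than raw image arrays, which would also explain the shape mismatch (the file names below are placeholders):
```python
import face_recognition

# Encode the single face in the first photo.
img1 = face_recognition.load_image_file("person.jpg")
known_encoding = face_recognition.face_encodings(img1)[0]  # assumes exactly one face found

# Encode every face in the group photo and compare each one.
img2 = face_recognition.load_image_file("group.jpg")
for encoding in face_recognition.face_encodings(img2):
    match = face_recognition.compare_faces([known_encoding], encoding, tolerance=0.6)[0]
    if match:
        print("The individual appears in the group photo")
```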
1medium
Title: [BUG] Brief Description of the Issue
Body: Found a bug? Please fill out the sections below. 👍

### Describe the bug
A clear and concise description of what the bug is.

### Steps to Reproduce
1. When starting the project in VS Code and running it locally, it just opens the local Git window and the project crashes.
2. The screens open for about 2 seconds and are then wiped out.

### Expected Behavior
I expected a file to open where I can write something and order it to do the task, as others are doing.

### Actual Behavior
Mentioned above.

### Environment
- OS: win 10
- Model Used (e.g., GPT-4v, Gemini Pro Vision):
- Framework Version (optional):

### Screenshots
If applicable, add screenshots to help explain your problem.

### Additional context
Add any other context about the problem here.
1medium
Title: Dynamic stock pool for rolling prediction Body: I would like a feature where the stock pool is dynamic during rolling prediction. For example, on each roll, the pool for that roll is dynamically determined as the top 300 stocks by market cap on the roll date.
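A sketch of the behaviour I have in mind (pandas-style pseudocode; `market_cap` and its layout are placeholders, not an existing API):
```python
import pandas as pd

def dynamic_pool(market_cap: pd.DataFrame, roll_date: str, top_n: int = 300) -> list:
    """Pool for one roll: the top_n stocks by market cap on the roll date."""
    caps_on_date = market_cap.loc[roll_date]   # Series indexed by stock id
    return caps_on_date.nlargest(top_n).index.tolist()

# At the start of each roll, recompute the pool before fitting/predicting:
# pool = dynamic_pool(market_cap, roll_date)
```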
1medium
Title: [Doc]: Add uv and pixi install instructions Body: ### Documentation Link https://matplotlib.org/devdocs/#install ### Problem Since uv and pixi are shaking up the package managment landscape, they should be included in our install instructions. ### Suggested improvement _No response_
0easy
Title: run black and other linting tools on generated code Body: In a comment on #2276, @alexcjohnson suggested: > I wonder, now that we're Py3-only, maybe we should call black on the generated files as part of the generation process? We already do that in the core component packages as a separate step.
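A sketch of what the generation step could do (using black's programmatic API; note black does not promise a stable Python API, so shelling out to the `black` CLI is a safer alternative):
```python
import black

def write_generated_module(path: str, source: str) -> None:
    # Format the generated source before writing it out, mirroring what the
    # core component packages already do as a separate step.
    formatted = black.format_str(source, mode=black.Mode())
    with open(path, "w") as f:
        f.write(formatted)
```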
1medium
Title: Multiple instances of keypoint labels per image Body: ## 🐛 Bug I have an image that contains multiple faces and multiple landmarks. The number of faces and landmark instances is variable. The problem is that `alb.KeypointParams` allows to pass only a single instance of keypoints. ## To Reproduce ```python image = cv2.cvtColor( cv2.imread("couple.jpg"), cv2.COLOR_BGR2RGB, ) # Two instances of bounding boxes for faces boxes = ( (332, 128, 542, 424), (542, 232, 726, 498), ) # Two instances of bounding boxes for landmarks keypoints = ( [ [410.562, 223.625], [482.817, 268.089], [436.5, 286.616], [364.246, 301.438], [443.911, 344.049], ], [ [590.205, 329.531], [676.795, 337.857], [633.5, 381.152], [580.214, 417.786], [668.469, 429.442], ], ) transform = alb.Compose( bbox_params=alb.BboxParams( format="pascal_voc", label_fields=["category_ids"], ), keypoint_params=alb.KeypointParams( format="xy", ), p=1, transforms=[ alb.Resize(height=1024, width=1024, p=1), ], ) sample = transform( image=image, bboxes=boxes, category_ids=np.ones(len(boxes)), keypoints=keypoints, ) ``` This script yields the following error: ``` Traceback (most recent call last): File "example.py", line 49, in <module> sample = transform( File "/python3.10/site-packages/albumentations/core/composition.py", line 207, in __call__ p.preprocess(data) File "/python3.10/site-packages/albumentations/core/utils.py", line 83, in preprocess data[data_name] = self.check_and_convert(data[data_name], rows, cols, direction="to") File "/python3.10/site-packages/albumentations/core/utils.py", line 91, in check_and_convert return self.convert_to_albumentations(data, rows, cols) File "/python3.10/site-packages/albumentations/core/keypoints_utils.py", line 140, in convert_to_albumentations return convert_keypoints_to_albumentations( File "/python3.10/site-packages/albumentations/core/keypoints_utils.py", line 269, in convert_keypoints_to_albumentations return [ File "/python3.10/site-packages/albumentations/core/keypoints_utils.py", line 270, in <listcomp> convert_keypoint_to_albumentations(kp, source_format, rows, cols, check_validity, angle_in_degrees) File "/python3.10/site-packages/albumentations/core/keypoints_utils.py", line 220, in convert_keypoint_to_albumentations check_keypoint(keypoint, rows, cols) File "/python3.10/site-packages/albumentations/core/keypoints_utils.py", line 153, in check_keypoint if not 0 <= value < size: TypeError: '<=' not supported between instances of 'int' and 'list' ``` ## Expected behavior This should work, it looks like a "natural" thing to have: to treat keypoints the same way as bounding boxes. Currently I managed to make it work with the following snippet: ```python sample = transform( image=image, bboxes=boxes, category_ids=np.ones(len(boxes)), keypoints=np.asarray(keypoints).reshape(-1, 2), # Merge keypoints ) keypoints = np.asarray(sample["keypoints"]).reshape(-1, 5, 2) # Transform/Reshape them back ``` However this feels like a dirty workaround. Perhaps there should be an easier way to achieve the same, but without bugs ![desired](https://github.com/albumentations-team/albumentations/assets/4759946/af29f61a-3cf3-4ddc-ba1a-5a846a7f4e37) ## Environment - Albumentations version (e.g., 0.1.8): `1.3.1` - Python version (e.g., 3.7): `3.10` - OS (e.g., Linux): any - How you installed albumentations (`conda`, `pip`, source): pip - Any other relevant information:
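To make the ask concrete, here is roughly the helper I use today to hide the workaround (a sketch; it assumes the transform drops no keypoints, e.g. `remove_invisible=False`, since a dropped point would break the reshape):
```python
import numpy as np

def transform_keypoint_groups(transform, image, bboxes, category_ids, keypoint_groups):
    """Flatten per-instance keypoint groups before the transform, regroup after."""
    groups = np.asarray(keypoint_groups)          # (n_instances, n_points, 2)
    n_instances, n_points, _ = groups.shape
    sample = transform(
        image=image,
        bboxes=bboxes,
        category_ids=category_ids,
        keypoints=groups.reshape(-1, 2),          # merge all instances
    )
    sample["keypoints"] = np.asarray(sample["keypoints"]).reshape(n_instances, n_points, 2)
    return sample
```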
1medium
Title: [Feature request] CogvideoX Controlnet integration for 5B / 2B
Body: **Is your feature request related to a problem? Please describe.**
Came across this and it would be a useful addition: https://github.com/TheDenk/cogvideox-controlnet

**Describe the solution you'd like.**
If possible, add ControlNet support for CogVideoX. The existing code is based on diffusers only.

**Describe alternatives you've considered.**
N.A.

**Additional context.**
N.A.
1medium
Title: Error "TypeError: 'NoneType' object is not iterable" triggered by modellib.MaskRCNN in demo.ipynb Body: Hi, I have an error when I try to run the demo.ipynb file. I guess it's due to modellib.MasrRCNN function (see error below). My first guess is that it's due to tensorflow version that i'm using which is 2.2.0. But when I changed that to an older version 1.3.0, there's an incompatibility between keras and tensorflow as keras requires at least tensorflow version 2.2. Can anyone help please? Thanks, --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-4-de1010a949a4> in <module> 1 # Create model object in inference mode. ----> 2 model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config) 3 # Load weights trained on MS-COCO 4 model.load_weights(COCO_MODEL_PATH, by_name=True) ~\Documents\1_USMBA\sw\Mask_RCNN\mrcnn\model.py in __init__(self, mode, config, model_dir) 1835 self.model_dir = model_dir 1836 self.set_log_dir() -> 1837 self.keras_model = self.build(mode=mode, config=config) 1838 1839 def build(self, mode, config): ~\Documents\1_USMBA\sw\Mask_RCNN\mrcnn\model.py in build(self, mode, config) 1899 else: 1900 _, C2, C3, C4, C5 = resnet_graph(input_image, config.BACKBONE, -> 1901 stage5=True, train_bn=config.TRAIN_BN) 1902 # Top-down Layers 1903 # TODO: add assert to varify feature map sizes match what's in config ~\Documents\1_USMBA\sw\Mask_RCNN\mrcnn\model.py in resnet_graph(input_image, architecture, stage5, train_bn) 178 # Stage 1 179 x = KL.ZeroPadding2D((3, 3))(input_image) --> 180 x = KL.Conv2D(64, (7, 7), strides=(2, 2), name='conv1', use_bias=True)(x) 181 x = BatchNorm(name='bn_conv1')(x, training=train_bn) 182 x = KL.Activation('relu')(x) c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in __call__(self, *args, **kwargs) 895 # Build layer if applicable (if the `build` method has been 896 # overridden). --> 897 self._maybe_build(inputs) 898 cast_inputs = self._maybe_cast_inputs(inputs) 899 c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in _maybe_build(self, inputs) 2414 # operations. 
2415 with tf_utils.maybe_init_scope(self): -> 2416 self.build(input_shapes) # pylint:disable=not-callable 2417 # We must set also ensure that the layer is marked as built, and the build 2418 # shape is stored since user defined build functions may not be calling c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\keras\layers\convolutional.py in build(self, input_shape) 161 constraint=self.kernel_constraint, 162 trainable=True, --> 163 dtype=self.dtype) 164 if self.use_bias: 165 self.bias = self.add_weight( c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in add_weight(self, name, shape, dtype, initializer, regularizer, trainable, constraint, partitioner, use_resource, synchronization, aggregation, **kwargs) 575 synchronization=synchronization, 576 aggregation=aggregation, --> 577 caching_device=caching_device) 578 if regularizer is not None: 579 # TODO(fchollet): in the future, this should be handled at the c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\training\tracking\base.py in _add_variable_with_custom_getter(self, name, shape, dtype, initializer, getter, overwrite, **kwargs_for_getter) 741 dtype=dtype, 742 initializer=initializer, --> 743 **kwargs_for_getter) 744 745 # If we set an initializer and the variable processed it, tracking will not c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\keras\engine\base_layer_utils.py in make_variable(name, shape, dtype, initializer, trainable, caching_device, validate_shape, constraint, use_resource, collections, synchronization, aggregation, partitioner) 139 synchronization=synchronization, 140 aggregation=aggregation, --> 141 shape=variable_shape if variable_shape else None) 142 143 c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\ops\variables.py in __call__(cls, *args, **kwargs) 257 def __call__(cls, *args, **kwargs): 258 if cls is VariableV1: --> 259 return cls._variable_v1_call(*args, **kwargs) 260 elif cls is Variable: 261 return cls._variable_v2_call(*args, **kwargs) c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\ops\variables.py in _variable_v1_call(cls, initial_value, trainable, collections, validate_shape, caching_device, name, variable_def, dtype, expected_shape, import_scope, constraint, use_resource, synchronization, aggregation, shape) 218 synchronization=synchronization, 219 aggregation=aggregation, --> 220 shape=shape) 221 222 def _variable_v2_call(cls, c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\ops\variables.py in <lambda>(**kwargs) 196 shape=None): 197 """Call on Variable class. 
Useful to force the signature.""" --> 198 previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs) 199 for _, getter in ops.get_default_graph()._variable_creator_stack: # pylint: disable=protected-access 200 previous_getter = _make_getter(getter, previous_getter) c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\ops\variable_scope.py in default_variable_creator(next_creator, **kwargs) 2596 synchronization=synchronization, 2597 aggregation=aggregation, -> 2598 shape=shape) 2599 else: 2600 return variables.RefVariable( c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\ops\variables.py in __call__(cls, *args, **kwargs) 261 return cls._variable_v2_call(*args, **kwargs) 262 else: --> 263 return super(VariableMetaclass, cls).__call__(*args, **kwargs) 264 265 c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py in __init__(self, initial_value, trainable, collections, validate_shape, caching_device, name, dtype, variable_def, import_scope, constraint, distribute_strategy, synchronization, aggregation, shape) 1432 aggregation=aggregation, 1433 shape=shape, -> 1434 distribute_strategy=distribute_strategy) 1435 1436 def _init_from_args(self, c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py in _init_from_args(self, initial_value, trainable, collections, caching_device, name, dtype, constraint, synchronization, aggregation, distribute_strategy, shape) 1565 with ops.name_scope("Initializer"), device_context_manager(None): 1566 initial_value = ops.convert_to_tensor( -> 1567 initial_value() if init_from_fn else initial_value, 1568 name="initial_value", dtype=dtype) 1569 if shape is not None: c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\keras\engine\base_layer_utils.py in <lambda>() 119 (type(init_ops.Initializer), type(init_ops_v2.Initializer))): 120 initializer = initializer() --> 121 init_val = lambda: initializer(shape, dtype=dtype) 122 variable_dtype = dtype.base_dtype 123 if use_resource is None: c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\ops\init_ops_v2.py in __call__(self, shape, dtype) 556 else: 557 limit = math.sqrt(3.0 * scale) --> 558 return self._random_generator.random_uniform(shape, -limit, limit, dtype) 559 560 def get_config(self): c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\ops\init_ops_v2.py in random_uniform(self, shape, minval, maxval, dtype) 1066 op = random_ops.random_uniform 1067 return op( -> 1068 shape=shape, minval=minval, maxval=maxval, dtype=dtype, seed=self.seed) 1069 1070 def truncated_normal(self, shape, mean, stddev, dtype): c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\ops\random_ops.py in random_uniform(shape, minval, maxval, dtype, seed, name) 280 maxval = 1 281 with ops.name_scope(name, "random_uniform", [shape, minval, maxval]) as name: --> 282 shape = tensor_util.shape_tensor(shape) 283 # In case of [0,1) floating results, minval and maxval is unused. We do an 284 # `is` comparison here since this is cheaper than isinstance or __eq__. c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\framework\tensor_util.py in shape_tensor(shape) 1013 # not convertible to Tensors because of mixed content. 
1014 shape = tuple(map(tensor_shape.dimension_value, shape)) -> 1015 return ops.convert_to_tensor(shape, dtype=dtype, name="shape") 1016 1017 c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\framework\ops.py in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, dtype_hint, ctx, accepted_result_types) 1339 1340 if ret is None: -> 1341 ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref) 1342 1343 if ret is NotImplemented: c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\framework\constant_op.py in _constant_tensor_conversion_function(v, dtype, name, as_ref) 319 as_ref=False): 320 _ = as_ref --> 321 return constant(v, dtype=dtype, name=name) 322 323 c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\framework\constant_op.py in constant(value, dtype, shape, name) 260 """ 261 return _constant_impl(value, dtype, shape, name, verify_shape=False, --> 262 allow_broadcast=True) 263 264 c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\framework\constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast) 268 ctx = context.context() 269 if ctx.executing_eagerly(): --> 270 t = convert_to_eager_tensor(value, ctx, dtype) 271 if shape is None: 272 return t c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\framework\constant_op.py in convert_to_eager_tensor(value, ctx, dtype) 93 except AttributeError: 94 dtype = dtypes.as_dtype(dtype).as_datatype_enum ---> 95 ctx.ensure_initialized() 96 return ops.EagerTensor(value, ctx.device_name, dtype) 97 c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\eager\context.py in ensure_initialized(self) 500 opts = pywrap_tfe.TFE_NewContextOptions() 501 try: --> 502 config_str = self.config.SerializeToString() 503 pywrap_tfe.TFE_ContextOptionsSetConfig(opts, config_str) 504 if self._device_policy is not None: c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\eager\context.py in config(self) 878 """Return the ConfigProto with all runtime deltas applied.""" 879 # Ensure physical devices have been discovered and config has been imported --> 880 self._initialize_physical_devices() 881 882 config = config_pb2.ConfigProto() c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\eager\context.py in _initialize_physical_devices(self) 1167 self._physical_devices = [ 1168 PhysicalDevice(name=d.decode(), -> 1169 device_type=d.decode().split(":")[1]) for d in devs] 1170 # Construct the visible device list from all physical devices but ignore 1171 # XLA devices TypeError: 'NoneType' object is not iterable
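Note that the bottom of the traceback is inside TensorFlow's device discovery rather than in the Mask_RCNN code itself, so my (unverified) guess is that the same crash reproduces without the model at all:
```python
import tensorflow as tf  # 2.2.0 in my environment

print(tf.__version__)
# Goes through the same context initialization the traceback ends in
# (Context.ensure_initialized -> _initialize_physical_devices):
print(tf.config.list_physical_devices())
```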
2hard
Title: Garmin login is broken (Garmin changed its login method); I will try to fix it as soon as possible Body:
1medium
Title: speed up checksum checks for large files
Body: Hey DVC devs! First off, a huge thank you for creating this brilliant project. It has been a game-changer for our data science projects.

In our work with lots of large input files (2 - 50+ GB), we've noticed that the checksum checks for these files are incredibly slow. Is there any plan or interest in speeding up the checksum checks? Are there any options to utilize integrated file system checksums or faster implementations of checksum calculations?

Thanks a ton!
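For context, this is roughly how I measured the hashing cost on one of our inputs (plain chunked md5, which I believe is where the status check spends its time; the path is a placeholder):
```python
import hashlib
import time

def md5_of(path: str, chunk_size: int = 8 * 1024 * 1024) -> str:
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

start = time.perf_counter()
digest = md5_of("data/big_input.bin")  # placeholder path
print(digest, f"{time.perf_counter() - start:.1f}s")
```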
1medium
Title: False positive integrity check failures when balancer is running and moving chunks
Body: #### Arctic Version
1.58
#### Arctic Store
VersionStore, NdarrayStore
#### Description of problem and/or code sample that reproduces the issue
We get a DataIntegrityException where the number of expected updated segments (with the new parent in their set) is double the number of segments meant to change. Our suspicion is that this happens when the balancer is running and moving chunks at the same time, so segments/documents may briefly exist in more than one replica set as they are being moved. This is even more likely because the spec we use for the update_many query only uses _id: [relevant code here](https://github.com/manahl/arctic/blob/v1.58.0/arctic/store/_ndarray_store.py#L368) This doesn't use the sharding key (symbol) and in effect is broadcast to all replica set servers.
```
INFO - return arctic.decorators.mongo_retry(f)(*args, **kwargs)
[2018-01-31 03:34:18,158] INFO - File "build/bdist.linux-x86_64/egg/arctic/decorators.py", line 50, in f_retry
[2018-01-31 03:34:18,158] INFO - return f(*args, **kwargs)
[2018-01-31 03:34:18,158] INFO - File "build/bdist.linux-x86_64/egg/xxxxxxxxxxxxxxx/version_store.py", line 224, in write
[2018-01-31 03:34:18,159] INFO - prune_previous_version=prune_previous_version, **kwargs)
[2018-01-31 03:34:18,159] INFO - File "build/bdist.linux-x86_64/egg/arctic/decorators.py", line 50, in f_retry
[2018-01-31 03:34:18,159] INFO - return f(*args, **kwargs)
[2018-01-31 03:34:18,159] INFO - File "build/bdist.linux-x86_64/egg/arctic/store/version_store.py", line 574, in write
[2018-01-31 03:34:18,159] INFO - handler.write(self._arctic_lib, version, symbol, data, previous_version, **kwargs)
[2018-01-31 03:34:18,159] INFO - File "build/bdist.linux-x86_64/egg/xxxxxxxxxxxxxxxx/_ts_ndarray_store.py", line 130, in write
[2018-01-31 03:34:18,160] INFO - super(TimeSeriesNdarrayStore, self).write(mongoose_lib, version, symbol, item, previous_version, **kwargs)
[2018-01-31 03:34:18,160] INFO - File "build/bdist.linux-x86_64/egg/arctic/store/_ndarray_store.py", line 414, in write
[2018-01-31 03:34:18,160] INFO - self._do_append(collection, version, symbol, item[previous_version['up_to']:], previous_version, dirty_append=True)
[2018-01-31 03:34:18,160] INFO - File "build/bdist.linux-x86_64/egg/arctic/store/_ndarray_store.py", line 315, in _do_append
[2018-01-31 03:34:18,160] INFO - self._concat_and_rewrite(collection, version, symbol, item, previous_version)
[2018-01-31 03:34:18,160] INFO - File "build/bdist.linux-x86_64/egg/arctic/store/_ndarray_store.py", line 372, in _concat_and_rewrite
[2018-01-31 03:34:18,160] INFO - len(unchanged_segment_ids)
[2018-01-31 03:34:18,160] INFO - DataIntegrityException: Symbol: GETF_median_volume_20d:97 update_many updated 2 segments instead of 1
```
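If that suspicion is right, a sketch of the fix would be to add the shard key to the spec so mongos can target the owning shard instead of broadcasting (pymongo-style pseudocode; the update document is my assumption based on the linked code, not verified):
```python
# With the shard key (symbol) in the filter, the update is routed to a single
# shard, so documents that transiently exist on the donor shard during a
# chunk migration are not counted twice.
result = collection.update_many(
    {'symbol': symbol, '_id': {'$in': unchanged_segment_ids}},
    {'$addToSet': {'parent': version['_id']}},
)
```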
2hard
Title: Incorrect plotting of exactly overlapping scatter with `hue` and `hue_order`
Body: While working with `sns.scatterplot` for representing locations on a grid, I discovered an issue where using `hue` and `hue_order` produces an incorrect plot: markers that should be perfectly overlapping (they have identical (`x`, `y`) coordinates) are drawn at a small offset, such that the edge of one can be seen intersecting the other.

Here's a minimal example that reproduces the issue with `matplotlib 3.9.1` and `seaborn 0.13.2`:
```python
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd

df = pd.DataFrame.from_dict({
    'x': [6.3, 6.3, 6.3, 6.3, 6.633333, 6.633333, 6.633333, 6.633333, 33.48, 33.48, 33.48, 33.48, 33.813333, 33.813333, 33.813333, 33.813333],
    'y': [-12.42, -12.42, -4.0, -4.0, -12.42, -12.42, -4.0, -4.0, -12.42, -12.42, -4.0, -4.0, -12.42, -12.42, -4.0, -4.0],
    'locid': ['loc1', 'loc1', 'loc1', 'loc1', 'loc2', 'loc2', 'loc2', 'loc2', 'loc1', 'loc1', 'loc1', 'loc1', 'loc2', 'loc2', 'loc2', 'loc2']
})

sns.scatterplot(
    data=df,
    x='x',
    y='y',
    marker="o",
    hue='locid',
    hue_order=['loc1'],
)

print('Pandas version: ', pd.__version__)              # 2.2.2
print('Matplotlib version: ', matplotlib.__version__)  # 3.9.1
print('Seaborn version: ', sns.__version__)            # 0.13.2
```
That code produces the following plot: ![bugPlot](https://github.com/user-attachments/assets/9f3b7511-f90c-48ca-b5e0-4bde179dc146) where at each corner, the edge of the second marker is clearly seen to intersect the face of the first.

From my brief dive into this problem:
1. As in the example, it doesn't take a tall stack of overlapping markers: there are only two points with the exact (6.3, -12.42) coordinates and the problem is already there.
2. The issue is seaborn-specific. Using matplotlib's `plt.scatter` does yield a correct plot (the exact comparison call is sketched below).
3. Both `hue` and `hue_order` need to be used in order for the issue to appear. Slicing the data with `df[df.locid == 'loc1']` makes a correct plot.
4. The problem persists even with `marker='.'`, `marker='s'`, `marker='v'` and `marker='d'`, but not with `marker='x'`.
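The comparison from point 2 above, roughly as I ran it (plain matplotlib on the same frame):
```python
import matplotlib.pyplot as plt

# Same data, no seaborn: the coincident markers land exactly on top of each
# other, so no edge shows through.
loc1 = df[df.locid == 'loc1']
fig, ax = plt.subplots()
ax.scatter(loc1['x'], loc1['y'], marker='o')
plt.show()
```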
1medium
Title: [Bug]: subprocess.Popen gets stuck when running sweep
Body: ### Describe the bug
wandb Version: 0.19.2
Python 3.10.12
Ubuntu 22.04

Inside my Python script I spawn a subprocess like:
```python
command = ["bash", "-c", "ffmpeg -framerate 24 -i frame_%d.png output.mp4"]
subprocess.Popen(command)
```
It works perfectly fine if I run it directly, but if I create a sweep and start an agent in the CLI it gets stuck indefinitely. Also, the output of this ffmpeg subprocess is always truncated no matter how I try to capture it (printing to the console or redirecting to a file). I'm not a specialist in multiprocessing, but apparently wandb does something that leads to the subprocess failing?
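A sketch of the workaround I'm currently testing, on the assumption (unconfirmed) that the hang comes from the child inheriting wandb's redirected stdout/stderr and blocking on a full pipe buffer:
```python
import subprocess

command = ["bash", "-c", "ffmpeg -framerate 24 -i frame_%d.png output.mp4"]
result = subprocess.run(
    command,
    stdout=subprocess.PIPE,      # capture instead of inheriting wandb's streams
    stderr=subprocess.STDOUT,
    text=True,
)
print(result.stdout)
```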
1medium
Title: requests and httpx return different results for the same request
Body:
```python
import httpx

he = {
    "cookie": "csrf_session_id=31bba393b7ed2c621edac0317048ba0a;tt_scid=VauYpHMBxVl.JRELyMyZUOvGbNLE0LM2FWR3I.-7y9ztQDU8xszugxiCKVYoVEWz84d0;ttcid=f38b7c0cc7634a3fb3e8fd56f40f60d542;msToken=5lVMnWk_WrZpmZi7yYDiwJQ0JUzVKb5U9VYKrWS47FHKMNVj04JvrtFjMLhKjCHRzx_EUIp8D-afCEqOk_kSRdGMQfkeYfjMZSUPaLaR;passport_csrf_token=49a20b5140e0a8d953007c360a3d48bd;passport_csrf_token_default=49a20b5140e0a8d953007c360a3d48bd;ttwid=1%7CtsDAT1Sn8VEDfr5JM0SuBwNAUl5KOKZvICLEf_13E28%7C1698977950%7Cb3159479b03f4c3a297cb9f6df8dd4b1c1dc23268a29b0a2a1c3724722a7efb3;__ac_nonce=06544589e0034d97e1fa;",
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36",
    "referer": "https://live.douyin.com/",
}
url = 'https://live.douyin.com/webcast/im/fetch/?resp_content_type=protobuf&did_rule=3&device_id=&app_name=douyin_web&endpoint=live_pc&support_wrds=1&user_unique_id=7297054683247429129&identity=audience&need_persist_msg_count=15&room_id=7297021771922918171&version_code=180800&last_rtt=-1&live_id=1&aid=6383&fetch_rule=1&cursor=&internal_ext=&device_platform=web&cookie_enabled=true&screen_width=1920&screen_height=1080&browser_language=zh-CN&browser_platform=Win32&browser_name=Mozilla&browser_version=5.0%20(Windows%20NT%2010.0;%20Win64;%20x64)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/117.0.0.0%20Safari/537.36&browser_online=true&tz_name=Asia/Shanghai&msToken=A9V7O6PityCrBntXftiZIDr7HXeYcho1z84ZmsyM1w9wbUIFrtfVrkcafKuV6dtVhFmhzSQdcnYI_bqeCF2ic9dU56raNUGiuQOFZhuJWLEMiZ8ZIA==&X-Bogus=DFSzswVYdTUANykmtFdl0M9WX7jQ'
res = httpx.get(url, headers=he)
print(res.text)
```
With httpx the response is empty, while requests returns protobuf data for the same request.
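For comparison, this is roughly the requests call that does return data (same `url` and `he` as above):
```python
import requests

res2 = requests.get(url, headers=he)
print(res2.status_code, len(res2.content))  # non-empty protobuf payload here
```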
1medium
Title: Support gradient accumulation in ExpertBackend Body: Larger Transformer models are trained with larger batches, so it's probably beneficial to accumulate gradients from several backward requests before making a step. It can be implemented in `ExpertBackend.apply_gradients()`, and the number of processed examples is already available. The tricky part is to implement loss averaging correctly: since we might have batches of different sizes (at least on the server side), we might need to scale the gradients from each batch according to its relative size.
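A minimal sketch of the size-weighted accumulation (names are hypothetical, not the actual `ExpertBackend` API; assumes each batch's loss is a mean over its examples):
```python
class GradientAccumulator:
    """Accumulate gradients until target_batch_size examples have been seen."""

    def __init__(self, target_batch_size: int):
        self.target_batch_size = target_batch_size
        self.accumulated_examples = 0

    def backward(self, loss, batch_size: int):
        # Scale by the batch's relative size so that, once target_batch_size
        # examples are accumulated, .grad holds the average over all of them.
        (loss * batch_size / self.target_batch_size).backward()
        self.accumulated_examples += batch_size

    def maybe_apply(self, optimizer):
        if self.accumulated_examples >= self.target_batch_size:
            optimizer.step()
            optimizer.zero_grad()
            self.accumulated_examples = 0
```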
1medium
Title: Unable to build gevent in aws codebuild agents. Help identify the problem Body: * gevent version: gevent==21.1.2 installed via pip * Python version: python:3.7-alpine docker image * Operating System: python:3.7-alpine ### Description: Unable to build gevent when the build runs on an AWS CodeBuild agent. It builds fine locally. It's unclear to me why. I have also created a ticket with AWS support. ### Stack trace ```python-traceback running build_ext -- generating cffi module 'build/temp.linux-x86_64-3.7/gevent.libuv._corecffi.c' creating build/temp.linux-x86_64-3.7 Running '(cd "/tmp/pip-install-fw15y3vy/gevent_27411c98f48148e38e31acc7df88a136/deps/libev" && sh ./configure -C > configure-output.txt )' in /tmp/pip-install-fw15y3vy/gevent_27411c98f48148e38e31acc7df88a136 config.status: error: in `/tmp/pip-install-fw15y3vy/gevent_27411c98f48148e38e31acc7df88a136/deps/libev': config.status: error: Something went wrong bootstrapping makefile fragments for automatic dependency tracking. Try re-running configure with the '--disable-dependency-tracking' option to at least be able to build the package (albeit without support for automatic dependency tracking). See `config.log' for more details Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 280, in <module> main() File "/usr/local/lib/python3.7/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 263, in main json_out['return_val'] = hook(**hook_input['kwargs']) File "/usr/local/lib/python3.7/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 205, in build_wheel metadata_directory) File "/tmp/pip-build-env-_7wi_zvn/overlay/lib/python3.7/site-packages/setuptools/build_meta.py", line 222, in build_wheel wheel_directory, config_settings) File "/tmp/pip-build-env-_7wi_zvn/overlay/lib/python3.7/site-packages/setuptools/build_meta.py", line 207, in _build_with_temp_dir self.run_setup() File "/tmp/pip-build-env-_7wi_zvn/overlay/lib/python3.7/site-packages/setuptools/build_meta.py", line 259, in run_setup self).run_setup(setup_script=setup_script) File "/tmp/pip-build-env-_7wi_zvn/overlay/lib/python3.7/site-packages/setuptools/build_meta.py", line 150, in run_setup exec(compile(code, __file__, 'exec'), locals()) File "setup.py", line 479, in <module> run_setup(EXT_MODULES) File "setup.py", line 463, in run_setup "signal_os_incompat = gevent.monkey:_subscribe_signal_os", File "/tmp/pip-build-env-_7wi_zvn/overlay/lib/python3.7/site-packages/setuptools/__init__.py", line 153, in setup return distutils.core.setup(**attrs) File "/usr/local/lib/python3.7/distutils/core.py", line 148, in setup dist.run_commands() File "/usr/local/lib/python3.7/distutils/dist.py", line 966, in run_commands self.run_command(cmd) File "/usr/local/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/tmp/pip-build-env-_7wi_zvn/overlay/lib/python3.7/site-packages/wheel/bdist_wheel.py", line 299, in run self.run_command('build') File "/usr/local/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/local/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/usr/local/lib/python3.7/distutils/command/build.py", line 135, in run self.run_command(cmd_name) File "/usr/local/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/local/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File
"/tmp/pip-build-env-_7wi_zvn/overlay/lib/python3.7/site-packages/cffi/setuptools_ext.py", line 143, in run ext.sources[0] = make_mod(self.build_temp, pre_run) File "/tmp/pip-build-env-_7wi_zvn/overlay/lib/python3.7/site-packages/cffi/setuptools_ext.py", line 128, in make_mod pre_run(ext, ffi) File "/tmp/pip-install-fw15y3vy/gevent_27411c98f48148e38e31acc7df88a136/_setuputils.py", line 364, in pre_run action() File "/tmp/pip-install-fw15y3vy/gevent_27411c98f48148e38e31acc7df88a136/_setuplibev.py", line 55, in configure_libev system(libev_configure_command) File "/tmp/pip-install-fw15y3vy/gevent_27411c98f48148e38e31acc7df88a136/_setuputils.py", line 195, in system if _system(cmd, cwd=cwd, env=env, **kwargs): File "/tmp/pip-install-fw15y3vy/gevent_27411c98f48148e38e31acc7df88a136/_setuputils.py", line 191, in _system return check_call(cmd, cwd=cwd, env=env, **kwargs) File "/usr/local/lib/python3.7/subprocess.py", line 363, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '(cd "/tmp/pip-install-fw15y3vy/gevent_27411c98f48148e38e31acc7df88a136/deps/libev" && sh ./configure -C > configure-output.txt )' returned non-zero exit status 1. ----------------------------------------  ERROR: Failed building wheel for gevent ``` ### What I've run: ```docker FROM python:3.7-alpine RUN mkdir /cb_f WORKDIR /cb_f ENV PYTHONPATH "${PYTHONPATH}:/cb_f" RUN \ apk add --no-cache postgresql-libs bash supervisor && \ apk add --no-cache --virtual .build-deps gcc libc-dev g++ postgresql-dev python3-dev libffi-dev musl-dev make build-base git alpine-sdk automake autoconf RUN apk add libtool ADD requirements.txt /cb_f/ RUN pip install -r requirements.txt --no-cache-dir && \ apk --purge del .build-deps ADD configs/supervisord.conf /etc/ ADD configs/supervisord_2.conf /etc/ ADD configs/supervisord_3.conf /etc/ COPY . /cb_f/ EXPOSE 5000 ```
2hard
Title: [MNT]: Consolidate tick API Body: ### Summary The Tick class/concept consists of a *ticklabel*, a *tickline* (the marker) and a *gridline*. In addition there's the *location* as a fundamental property. We have a lot of methods to handle ticks. There are methods on Axis to configure all these properties - often in flavors of major/minor and global (i.e. a function that lets you select major/minor/both). These methods would typically be used via `ax.xaxis.get_majorticklabels()`. Additionally, a subset of these wrappers is exposed at the Axes level, there in the flavors of x/y. ![Image](https://github.com/user-attachments/assets/b6ceebf4-e6c5-4d5a-bb30-6225c1421fb3) Overall we have 16 such methods on Axes and 14 on Axis that directly work on Tick instances or parts of them. ### Proposed fix We should discourage direct interaction with ticks or their components (ticklabel, tickline, gridline). As ticks are volatile, configuring the instances explicitly may not persist if the plot is changed later. Therefore, I would like to get rid of the high-level Axes methods that give access to these components: `get_xticklabels()`, `get_xmajorticklabels()`, `get_xminorticklabels()`, `get_xgridlines()`, `get_xticklines()` (and same for y). People should use `tick_params()` instead where possible, e.g. `ax.tick_params(labelsize=10)` instead of `for label in ax.get_xticklabels(): label.set_fontsize(10)`. This is not only shorter but also configures the common/universal tick property instead of individual volatile instances. Since `tick_params()` currently does not, and likely never will, provide full control over all aspects (e.g. [this example](https://matplotlib.org/stable/gallery/event_handling/pick_event_demo.html#simple-picking-lines-rectangles-and-text) makes tick labels pickable), users should use the underlying Axis functions if they really must access the individual tick components, i.e. use `ax.xaxis.get_ticklabels()` instead of `ax.get_xticklabels()`. While removing this bunch of wrapper functions on Axes is a massive API change, I think we eventually want to go there (slowly through a pending deprecation), because these functions nowadays encourage bad practice.
2hard
Title: Start Pulsar WSGI Server from Docker Body: Hey, I am using Pulsar for JSON-RPC and everything works fine locally, but I can't send calls from outside the Docker container where the WSGI server is running. Do I need to expose other ports than the one I expose with `--bind`, or what could be the issue? I was hoping I could simply expose a JSON-RPC API with pulsar, like I do with flask, and consume that from other pulsar or node apps.
1medium
Title: initialize_all_variables is deprecated Body: For the code in tf14 and tf15 I get: WARNING:tensorflow:From full_code.py:62 in <module>.: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02. Instructions for updating: Use `tf.global_variables_initializer` instead.
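A minimal sketch of the change the warning asks for (TF1-style session code, as in these examples):
```python
import tensorflow as tf

# Old (deprecated): init = tf.initialize_all_variables()
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
```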
1medium
Title: Need to include inputs in existing templates from the ModelView class. Body: ### Environment Flask-Appbuilder version:3.4.4 ### Describe the expected results I wanted to modify the edit template such that I can include my own inputs. But the existing functions/methods don't allow that. So I did the following instead of the conventional edit_template = 'xx.html'. ```python class SQLDepoModelView(ModelView): @expose("/edit") @has_access def edit(self): keywords = [keyword_obj.keyword for keyword_obj in db.session.query(sql_depo_keyword_helper).all()] return self.render_template('edit_template.html', keywords=keywords) ``` ### Describe the actual results I'll get a "jinja2.exceptions.UndefinedError: 'widgets' is undefined" instead. ```pytb * Environment: production WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. * Debug mode: off C:\ProgramData\Anaconda3\lib\site-packages\flask_sqlalchemy\__init__.py:873: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning. 'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and ' 2022-07-19 13:07:17,207:INFO:flask_appbuilder.base:Registering class IndexView on menu 2022-07-19 13:07:17,207:INFO:flask_appbuilder.baseviews:Registering route / ('GET',) 2022-07-19 13:07:17,223:INFO:flask_appbuilder.base:Registering class UtilView on menu 2022-07-19 13:07:17,223:INFO:flask_appbuilder.baseviews:Registering route /back ('GET',) 2022-07-19 13:07:17,238:INFO:flask_appbuilder.base:Registering class LocaleView on menu 2022-07-19 13:07:17,238:INFO:flask_appbuilder.baseviews:Registering route /lang/<string:locale> ('GET',) 2022-07-19 13:07:17,238:INFO:flask_appbuilder.base:Registering class SecurityApi on menu 2022-07-19 13:07:17,238:INFO:flask_appbuilder.api:Registering route /api/v1/security/login ['POST'] 2022-07-19 13:07:17,238:INFO:flask_appbuilder.api:Registering route /api/v1/security/refresh ['POST'] 2022-07-19 13:07:17,254:INFO:flask_appbuilder.base:Registering class ResetPasswordView on menu 2022-07-19 13:07:17,254:INFO:flask_appbuilder.baseviews:Registering route /resetpassword/form ['GET'] 2022-07-19 13:07:17,254:INFO:flask_appbuilder.baseviews:Registering route /resetpassword/form ['POST'] 2022-07-19 13:07:17,285:INFO:flask_appbuilder.base:Registering class ResetMyPasswordView on menu 2022-07-19 13:07:17,287:INFO:flask_appbuilder.baseviews:Registering route /resetmypassword/form ['GET'] 2022-07-19 13:07:17,287:INFO:flask_appbuilder.baseviews:Registering route /resetmypassword/form ['POST'] 2022-07-19 13:07:17,303:INFO:flask_appbuilder.base:Registering class UserInfoEditView on menu 2022-07-19 13:07:17,303:INFO:flask_appbuilder.baseviews:Registering route /userinfoeditview/form ['GET'] 2022-07-19 13:07:17,303:INFO:flask_appbuilder.baseviews:Registering route /userinfoeditview/form ['POST'] 2022-07-19 13:07:17,319:INFO:flask_appbuilder.base:Registering class AuthDBView on menu 2022-07-19 13:07:17,319:INFO:flask_appbuilder.baseviews:Registering route /login/ ['GET', 'POST'] 2022-07-19 13:07:17,319:INFO:flask_appbuilder.baseviews:Registering route /logout/ ('GET',) 2022-07-19 13:07:17,335:INFO:flask_appbuilder.base:Registering class UserDBModelView on menu List Users 2022-07-19 13:07:17,335:INFO:flask_appbuilder.baseviews:Registering route /users/action/<string:name>/<pk> ['GET', 'POST'] 2022-07-19 13:07:17,335:INFO:flask_appbuilder.baseviews:Registering
route /users/action_post ['POST'] 2022-07-19 13:07:17,335:INFO:flask_appbuilder.baseviews:Registering route /users/add ['GET', 'POST'] 2022-07-19 13:07:17,335:INFO:flask_appbuilder.baseviews:Registering route /users/api ['GET'] 2022-07-19 13:07:17,335:INFO:flask_appbuilder.baseviews:Registering route /users/api/column/add/<col_name> ['GET'] 2022-07-19 13:07:17,335:INFO:flask_appbuilder.baseviews:Registering route /users/api/column/edit/<col_name> ['GET'] 2022-07-19 13:07:17,335:INFO:flask_appbuilder.baseviews:Registering route /users/api/create ['POST'] 2022-07-19 13:07:17,335:INFO:flask_appbuilder.baseviews:Registering route /users/api/delete/<pk> ['DELETE'] 2022-07-19 13:07:17,350:INFO:flask_appbuilder.baseviews:Registering route /users/api/get/<pk> ['GET'] 2022-07-19 13:07:17,350:INFO:flask_appbuilder.baseviews:Registering route /users/api/read ['GET'] 2022-07-19 13:07:17,350:INFO:flask_appbuilder.baseviews:Registering route /users/api/readvalues ['GET'] 2022-07-19 13:07:17,350:INFO:flask_appbuilder.baseviews:Registering route /users/api/update/<pk> ['PUT'] 2022-07-19 13:07:17,350:INFO:flask_appbuilder.baseviews:Registering route /users/delete/<pk> ['GET', 'POST'] 2022-07-19 13:07:17,350:INFO:flask_appbuilder.baseviews:Registering route /users/download/<string:filename> ('GET',) 2022-07-19 13:07:17,350:INFO:flask_appbuilder.baseviews:Registering route /users/edit/<pk> ['GET', 'POST'] 2022-07-19 13:07:17,350:INFO:flask_appbuilder.baseviews:Registering route /users/list/ ('GET',) 2022-07-19 13:07:17,366:INFO:flask_appbuilder.baseviews:Registering route /users/show/<pk> ['GET'] 2022-07-19 13:07:17,366:INFO:flask_appbuilder.baseviews:Registering route /users/userinfo/ ('GET',) 2022-07-19 13:07:17,414:INFO:flask_appbuilder.base:Registering class RoleModelView on menu List Roles 2022-07-19 13:07:17,414:INFO:flask_appbuilder.baseviews:Registering route /roles/action/<string:name>/<pk> ['GET', 'POST'] 2022-07-19 13:07:17,466:INFO:flask_appbuilder.baseviews:Registering route /roles/action_post ['POST'] 2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/add ['GET', 'POST'] 2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/api ['GET'] 2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/api/column/add/<col_name> ['GET'] 2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/api/column/edit/<col_name> ['GET'] 2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/api/create ['POST'] 2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/api/delete/<pk> ['DELETE'] 2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/api/get/<pk> ['GET'] 2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/api/read ['GET'] 2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/api/readvalues ['GET'] 2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/api/update/<pk> ['PUT'] 2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/delete/<pk> ['GET', 'POST'] 2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/download/<string:filename> ('GET',) 2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/edit/<pk> ['GET', 'POST'] 2022-07-19 13:07:17,498:INFO:flask_appbuilder.baseviews:Registering route /roles/list/ ('GET',) 2022-07-19 
13:07:17,591:INFO:flask_appbuilder.baseviews:Registering route /roles/show/<pk> ['GET'] 2022-07-19 13:07:17,668:INFO:flask_appbuilder.base:Registering class UserStatsChartView on menu User's Statistics 2022-07-19 13:07:17,668:INFO:flask_appbuilder.baseviews:Registering route /userstatschartview/chart/ ('GET',) 2022-07-19 13:07:17,684:INFO:flask_appbuilder.baseviews:Registering route /userstatschartview/chart/<group_by> ('GET',) 2022-07-19 13:07:17,731:INFO:flask_appbuilder.base:Registering class PermissionModelView on menu Base Permissions 2022-07-19 13:07:17,731:INFO:flask_appbuilder.baseviews:Registering route /permissions/action/<string:name>/<pk> ['GET', 'POST'] 2022-07-19 13:07:17,746:INFO:flask_appbuilder.baseviews:Registering route /permissions/action_post ['POST'] 2022-07-19 13:07:17,746:INFO:flask_appbuilder.baseviews:Registering route /permissions/add ['GET', 'POST'] 2022-07-19 13:07:17,746:INFO:flask_appbuilder.baseviews:Registering route /permissions/api ['GET'] 2022-07-19 13:07:17,746:INFO:flask_appbuilder.baseviews:Registering route /permissions/api/column/add/<col_name> ['GET'] 2022-07-19 13:07:17,762:INFO:flask_appbuilder.baseviews:Registering route /permissions/api/column/edit/<col_name> ['GET'] 2022-07-19 13:07:17,825:INFO:flask_appbuilder.baseviews:Registering route /permissions/api/create ['POST'] 2022-07-19 13:07:17,825:INFO:flask_appbuilder.baseviews:Registering route /permissions/api/delete/<pk> ['DELETE'] 2022-07-19 13:07:17,825:INFO:flask_appbuilder.baseviews:Registering route /permissions/api/get/<pk> ['GET'] 2022-07-19 13:07:17,825:INFO:flask_appbuilder.baseviews:Registering route /permissions/api/read ['GET'] 2022-07-19 13:07:17,825:INFO:flask_appbuilder.baseviews:Registering route /permissions/api/readvalues ['GET'] 2022-07-19 13:07:17,825:INFO:flask_appbuilder.baseviews:Registering route /permissions/api/update/<pk> ['PUT'] 2022-07-19 13:07:17,825:INFO:flask_appbuilder.baseviews:Registering route /permissions/delete/<pk> ['GET', 'POST'] 2022-07-19 13:07:17,825:INFO:flask_appbuilder.baseviews:Registering route /permissions/download/<string:filename> ('GET',) 2022-07-19 13:07:17,825:INFO:flask_appbuilder.baseviews:Registering route /permissions/edit/<pk> ['GET', 'POST'] 2022-07-19 13:07:17,825:INFO:flask_appbuilder.baseviews:Registering route /permissions/list/ ('GET',) 2022-07-19 13:07:17,825:INFO:flask_appbuilder.baseviews:Registering route /permissions/show/<pk> ['GET'] 2022-07-19 13:07:17,871:INFO:flask_appbuilder.base:Registering class ViewMenuModelView on menu Views/Menus 2022-07-19 13:07:17,903:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/action/<string:name>/<pk> ['GET', 'POST'] 2022-07-19 13:07:17,903:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/action_post ['POST'] 2022-07-19 13:07:17,903:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/add ['GET', 'POST'] 2022-07-19 13:07:17,903:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/api ['GET'] 2022-07-19 13:07:17,903:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/api/column/add/<col_name> ['GET'] 2022-07-19 13:07:17,903:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/api/column/edit/<col_name> ['GET'] 2022-07-19 13:07:17,918:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/api/create ['POST'] 2022-07-19 13:07:17,983:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/api/delete/<pk> ['DELETE'] 2022-07-19 13:07:17,983:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/api/get/<pk> ['GET'] 
2022-07-19 13:07:17,983:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/api/read ['GET'] 2022-07-19 13:07:17,996:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/api/readvalues ['GET'] 2022-07-19 13:07:17,996:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/api/update/<pk> ['PUT'] 2022-07-19 13:07:17,996:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/delete/<pk> ['GET', 'POST'] 2022-07-19 13:07:17,996:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/download/<string:filename> ('GET',) 2022-07-19 13:07:18,012:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/edit/<pk> ['GET', 'POST'] 2022-07-19 13:07:18,090:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/list/ ('GET',) 2022-07-19 13:07:18,090:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/show/<pk> ['GET'] 2022-07-19 13:07:18,137:INFO:flask_appbuilder.base:Registering class PermissionViewModelView on menu Permission on Views/Menus 2022-07-19 13:07:18,137:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/action/<string:name>/<pk> ['GET', 'POST'] 2022-07-19 13:07:18,168:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/action_post ['POST'] 2022-07-19 13:07:18,168:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/add ['GET', 'POST'] 2022-07-19 13:07:18,168:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/api ['GET'] 2022-07-19 13:07:18,168:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/api/column/add/<col_name> ['GET'] 2022-07-19 13:07:18,168:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/api/column/edit/<col_name> ['GET'] 2022-07-19 13:07:18,168:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/api/create ['POST'] 2022-07-19 13:07:18,184:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/api/delete/<pk> ['DELETE'] 2022-07-19 13:07:18,200:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/api/get/<pk> ['GET'] 2022-07-19 13:07:18,200:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/api/read ['GET'] 2022-07-19 13:07:18,200:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/api/readvalues ['GET'] 2022-07-19 13:07:18,200:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/api/update/<pk> ['PUT'] 2022-07-19 13:07:18,200:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/delete/<pk> ['GET', 'POST'] 2022-07-19 13:07:18,200:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/download/<string:filename> ('GET',) 2022-07-19 13:07:18,200:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/edit/<pk> ['GET', 'POST'] 2022-07-19 13:07:18,200:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/list/ ('GET',) 2022-07-19 13:07:18,200:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/show/<pk> ['GET'] 2022-07-19 13:07:18,262:INFO:flask_appbuilder.base:Registering class MenuApi on menu 2022-07-19 13:07:18,305:INFO:flask_appbuilder.api:Registering route /api/v1/menu/ ['GET'] 2022-07-19 13:07:18,570:INFO:flask_appbuilder.base:Registering class SQLDepoModelView on menu All SQLs 2022-07-19 13:07:18,570:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/action/<string:name>/<pk> ['GET', 'POST'] 2022-07-19 13:07:18,586:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/action_post ['POST'] 2022-07-19 13:07:18,593:INFO:flask_appbuilder.baseviews:Registering 
route /sqldepomodelview/add ['GET', 'POST'] 2022-07-19 13:07:18,631:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/api ['GET'] 2022-07-19 13:07:18,631:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/api/column/add/<col_name> ['GET'] 2022-07-19 13:07:18,631:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/api/column/edit/<col_name> ['GET'] 2022-07-19 13:07:18,631:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/api/create ['POST'] 2022-07-19 13:07:18,631:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/api/delete/<pk> ['DELETE'] 2022-07-19 13:07:18,631:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/api/get/<pk> ['GET'] 2022-07-19 13:07:18,647:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/api/read ['GET'] 2022-07-19 13:07:18,647:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/api/readvalues ['GET'] 2022-07-19 13:07:18,662:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/api/update/<pk> ['PUT'] 2022-07-19 13:07:18,662:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/delete/<pk> ['GET', 'POST'] 2022-07-19 13:07:18,662:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/download/<string:filename> ('GET',) 2022-07-19 13:07:18,662:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/edit ('GET',) 2022-07-19 13:07:18,662:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/list/ ('GET',) 2022-07-19 13:07:18,662:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/show/<pk> ['GET'] 2022-07-19 13:07:18,712:INFO:flask_appbuilder.base:Registering class KeywordView on menu Keyword 2022-07-19 13:07:18,724:INFO:flask_appbuilder.baseviews:Registering route /keywordview/action/<string:name>/<pk> ['GET', 'POST'] 2022-07-19 13:07:18,724:INFO:flask_appbuilder.baseviews:Registering route /keywordview/action_post ['POST'] 2022-07-19 13:07:18,724:INFO:flask_appbuilder.baseviews:Registering route /keywordview/add ['GET', 'POST'] 2022-07-19 13:07:18,724:INFO:flask_appbuilder.baseviews:Registering route /keywordview/api ['GET'] 2022-07-19 13:07:18,724:INFO:flask_appbuilder.baseviews:Registering route /keywordview/api/column/add/<col_name> ['GET'] 2022-07-19 13:07:18,724:INFO:flask_appbuilder.baseviews:Registering route /keywordview/api/column/edit/<col_name> ['GET'] 2022-07-19 13:07:18,724:INFO:flask_appbuilder.baseviews:Registering route /keywordview/api/create ['POST'] 2022-07-19 13:07:18,724:INFO:flask_appbuilder.baseviews:Registering route /keywordview/api/delete/<pk> ['DELETE'] 2022-07-19 13:07:18,724:INFO:flask_appbuilder.baseviews:Registering route /keywordview/api/get/<pk> ['GET'] 2022-07-19 13:07:18,724:INFO:flask_appbuilder.baseviews:Registering route /keywordview/api/read ['GET'] 2022-07-19 13:07:18,740:INFO:flask_appbuilder.baseviews:Registering route /keywordview/api/readvalues ['GET'] 2022-07-19 13:07:18,740:INFO:flask_appbuilder.baseviews:Registering route /keywordview/api/update/<pk> ['PUT'] 2022-07-19 13:07:18,740:INFO:flask_appbuilder.baseviews:Registering route /keywordview/delete/<pk> ['GET', 'POST'] 2022-07-19 13:07:18,758:INFO:flask_appbuilder.baseviews:Registering route /keywordview/download/<string:filename> ('GET',) 2022-07-19 13:07:18,849:INFO:flask_appbuilder.baseviews:Registering route /keywordview/edit/<pk> ['GET', 'POST'] 2022-07-19 13:07:18,849:INFO:flask_appbuilder.baseviews:Registering route /keywordview/list/ ('GET',) 
2022-07-19 13:07:18,849:INFO:flask_appbuilder.baseviews:Registering route /keywordview/show/<pk> ['GET'] 2022-07-19 13:07:18,952:INFO:werkzeug: * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit) 2022-07-19 13:07:21,836:ERROR:app:Exception on /sqldepomodelview/edit [GET] Traceback (most recent call last): File "C:\ProgramData\Anaconda3\lib\site-packages\flask\app.py", line 2447, in wsgi_app response = self.full_dispatch_request() File "C:\ProgramData\Anaconda3\lib\site-packages\flask\app.py", line 1952, in full_dispatch_request rv = self.handle_user_exception(e) File "C:\ProgramData\Anaconda3\lib\site-packages\flask\app.py", line 1821, in handle_user_exception reraise(exc_type, exc_value, tb) File "C:\ProgramData\Anaconda3\lib\site-packages\flask\_compat.py", line 39, in reraise raise value File "C:\ProgramData\Anaconda3\lib\site-packages\flask\app.py", line 1950, in full_dispatch_request rv = self.dispatch_request() File "C:\ProgramData\Anaconda3\lib\site-packages\flask\app.py", line 1936, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "C:\ProgramData\Anaconda3\lib\flask_appbuilder\security\decorators.py", line 148, in wraps return f(self, *args, **kwargs) File "C:\MY CRM\PycharmProjects\sql_depository_app\sql_depo_app\app\views.py", line 31, in edit return self.render_template('edit_template.html', keywords=keywords) File "C:\ProgramData\Anaconda3\lib\flask_appbuilder\baseviews.py", line 288, in render_template template, **dict(list(kwargs.items()) + list(self.extra_args.items())) File "C:\ProgramData\Anaconda3\lib\site-packages\flask\templating.py", line 140, in render_template ctx.app, File "C:\ProgramData\Anaconda3\lib\site-packages\flask\templating.py", line 120, in _render rv = template.render(context) File "C:\ProgramData\Anaconda3\lib\site-packages\jinja2\environment.py", line 1291, in render self.environment.handle_exception() File "C:\ProgramData\Anaconda3\lib\site-packages\jinja2\environment.py", line 925, in handle_exception raise rewrite_traceback_stack(source=source) File "C:\MY CRM\PycharmProjects\sql_depository_app\sql_depo_app\app\templates\edit_template.html", line 1, in top-level template code {% extends "appbuilder/general/model/edit.html" %} File "C:\ProgramData\Anaconda3\lib\flask_appbuilder\templates\appbuilder\general\model\edit.html", line 2, in top-level template code {% import 'appbuilder/general/lib.html' as lib %} File "C:\ProgramData\Anaconda3\lib\flask_appbuilder\templates\appbuilder\base.html", line 1, in top-level template code {% extends base_template %} File "C:\MY CRM\PycharmProjects\sql_depository_app\sql_depo_app\app\templates\base_template.html", line 1, in top-level template code {% extends 'appbuilder/baselayout.html' %} File "C:\ProgramData\Anaconda3\lib\flask_appbuilder\templates\appbuilder\baselayout.html", line 2, in top-level template code {% import 'appbuilder/baselib.html' as baselib %} File "C:\ProgramData\Anaconda3\lib\flask_appbuilder\templates\appbuilder\init.html", line 37, in top-level template code {% block body %} File "C:\ProgramData\Anaconda3\lib\flask_appbuilder\templates\appbuilder\baselayout.html", line 19, in block 'body' {% block content %} File "C:\ProgramData\Anaconda3\lib\flask_appbuilder\templates\appbuilder\general\model\edit.html", line 23, in block 'content' {% block edit_form %} File "C:\MY CRM\PycharmProjects\sql_depository_app\sql_depo_app\app\templates\edit_template.html", line 4, in block 'edit_form' {{ super() }} File 
"C:\ProgramData\Anaconda3\lib\flask_appbuilder\templates\appbuilder\general\model\edit.html", line 25, in block 'edit_form' {{ widgets.get('edit')(form_action=form_action)|safe }} File "C:\ProgramData\Anaconda3\lib\site-packages\jinja2\environment.py", line 474, in getattr return getattr(obj, attribute) jinja2.exceptions.UndefinedError: 'widgets' is undefined 2022-07-19 13:07:21,853:INFO:werkzeug:127.0.0.1 - - [19/Jul/2022 13:07:21] "GET /sqldepomodelview/edit?pk=1 HTTP/1.1" 500 - ``` ### Steps to reproduce After setting up the model.py and view.py and any other preparations, I ran "flask run" in the command prompt
1medium
Title: Please release new version - Cannot revoke all tokens: AccessToken matching query does not exist Body: I am getting a DoesNotExist exception when trying to clean up as a user logs out. django-oauth-toolkit==1.2.0
```
from itertools import chain
from oauth2_provider.models import get_access_token_model, get_refresh_token_model

access_tokens = get_access_token_model().objects.filter(user=user)
refresh_tokens = get_refresh_token_model().objects.filter(user=user)

for token in chain(access_tokens, refresh_tokens):
    token.revoke()
```
It appears this is fixed in master: https://github.com/jazzband/django-oauth-toolkit/blob/master/oauth2_provider/models.py#L397 The fix is a year old: https://github.com/jazzband/django-oauth-toolkit/commit/5b51da74019046ef4c8c81c9975db029a2113d52#diff-ac3d3b1e30eb6e828386263c3a1256ca Please release a new version.
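In the meantime, a workaround sketch for 1.2.0. My reading of the linked fix (an assumption on my part) is that revoking a RefreshToken also deletes its paired AccessToken, so a later revoke() on that access token can no longer find it:
```python
from itertools import chain

from django.core.exceptions import ObjectDoesNotExist
from oauth2_provider.models import get_access_token_model, get_refresh_token_model

def revoke_all_tokens(user):
    access_tokens = get_access_token_model().objects.filter(user=user)
    refresh_tokens = get_refresh_token_model().objects.filter(user=user)
    for token in chain(access_tokens, refresh_tokens):
        try:
            token.revoke()
        except ObjectDoesNotExist:
            # the token was already deleted by a related revoke(); ignore it
            pass
```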
1medium
Title: Image size problem Body: When I use the PascalVOC format to save the XML result, the <width> and <height> elements are saved as 0 when the image size is 256*256.
1medium
Title: I want to integrate yolov8's detection + classification into one network, turning it into a multi-task network. Is there an existing case I can use for reference? ### Search before asking - [x] I have searched the Ultralytics [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests. ### Description I want to integrate yolov8's detection + classification into one network, turning it into a multi-task network. Is there an existing case I can use for reference? (A generic sketch of the idea is attached below.) ### Use case _No response_ ### Additional _No response_ ### Are you willing to submit a PR? - [x] Yes I'd like to help by submitting a PR!
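Not an Ultralytics API — just a generic illustration of the shared-backbone pattern I have in mind, in plain PyTorch (all module names and sizes here are hypothetical stand-ins for the YOLO backbone/neck):
```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared backbone feeding a detection head and a classification head."""
    def __init__(self, num_det_outputs=85, num_classes=10):
        super().__init__()
        # stand-in backbone; in practice this would be the YOLO backbone + neck
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.SiLU(),
        )
        self.det_head = nn.Conv2d(64, num_det_outputs, 1)   # per-cell box/obj/class outputs
        self.cls_head = nn.Sequential(                      # image-level label
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)
        )

    def forward(self, x):
        feats = self.backbone(x)
        return self.det_head(feats), self.cls_head(feats)

# training would combine the losses, e.g. total = det_loss + lambda * cls_loss
model = MultiTaskNet()
det_out, cls_out = model(torch.randn(1, 3, 640, 640))
```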
2hard
Title: Update to support werkzeug >= 0.12 Body: > Could not find a version that matches werkzeug==0.12,>=0.14
> - Werkzeug [required: ==0.12, installed: 0.14.1]

This is colliding with everything that uses the new version. ## Your Environment <!--- Include as many relevant details about the environment you experienced the bug in --> * Zappa version used: 0.45.1 * Operating System and Python version: Ubuntu 16.04 LTS * The output of `pip freeze`:
```
absl-py==0.2.2
argcomplete==1.9.2
astor==0.6.2
base58==0.2.4
bleach==1.5.0
boto3==1.7.35
botocore==1.10.35
certifi==2018.4.16
cfn-flip==1.0.3
chardet==3.0.4
click==6.7
docutils==0.14
durationpy==0.5
Flask==1.0.2
future==0.16.0
gast==0.2.0
grpcio==1.12.1
h5py==2.8.0
hjson==3.0.1
html5lib==0.9999999
idna==2.6
itsdangerous==0.24
Jinja2==2.10
jmespath==0.9.3
kappa==0.6.0
Keras==2.2.0
Keras-Applications==1.0.2
Keras-Preprocessing==1.0.1
lambda-packages==0.19.0
Markdown==2.6.11
MarkupSafe==1.0
numpy==1.14.4
pandas==0.23.0
placebo==0.8.1
protobuf==3.5.2.post1
python-dateutil==2.6.1
python-slugify==1.2.4
pytz==2018.4
PyYAML==3.12
requests==2.18.4
s3transfer==0.1.13
scipy==1.1.0
six==1.11.0
tensorboard==1.8.0
tensorflow==1.8.0
termcolor==1.1.0
toml==0.9.4
tqdm==4.19.1
troposphere==2.3.0
Unidecode==1.0.22
urllib3==1.22
Werkzeug==0.14.1
wsgi-request-logger==0.4.6
zappa==0.45.1
```
1medium
Title: [BUG] TypeError: Profile.__init__() missing 1 required positional argument: 'headers' Body: **Describe the bug** A clear and concise description of the bug: parsing a user's profile page from the GUI crashes, and the backend reports: TypeError: Profile.__init__() missing 1 required positional argument: 'headers' **To reproduce** Steps to reproduce the behavior: macOS
1medium
Title: [feat] Test `pixtral` as an OCR strategy Body: https://mistral.ai/news/pixtral-12b/
1medium
Title: google/gemma-3-27b-it context lenght issue Body: i have deployed the google/gemma-3-27b-it model on 4 H100 GPUS, it only supports 23k context length, when i increased to support 128k context window as it supports, i endup with following errors i even tried with 64k context window, it went into cuda out of memeory issues 2025-03-13T08:36:37.262517Z INFO text_generation_launcher: Runtime environment: Target: x86_64-unknown-linux-gnu Cargo version: 1.85.0 Commit sha: 411a28288de9218e2684dccbace481a1abdb0cef Docker label: sha-411a282 nvidia-smi: Thu Mar 13 08:36:36 2025 +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 550.127.08 Driver Version: 550.127.08 CUDA Version: 12.4 | |-----------------------------------------+------------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+========================+======================| | 0 NVIDIA H100 80GB HBM3 On | 00000000:45:00.0 Off | 0 | | N/A 29C P0 70W / 700W | 1MiB / 81559MiB | 0% Default | | | | Disabled | +-----------------------------------------+------------------------+----------------------+ | 1 NVIDIA H100 80GB HBM3 On | 00000000:4E:00.0 Off | 0 | | N/A 29C P0 69W / 700W | 1MiB / 81559MiB | 0% Default | | | | Disabled | +-----------------------------------------+------------------------+----------------------+ | 2 NVIDIA H100 80GB HBM3 On | 00000001:1B:00.0 Off | 0 | | N/A 31C P0 71W / 700W | 1MiB / 81559MiB | 0% Default | | | | Disabled | +-----------------------------------------+------------------------+----------------------+ | 3 NVIDIA H100 80GB HBM3 On | 00000001:24:00.0 Off | 0 | | N/A 28C P0 73W / 700W | 1MiB / 81559MiB | 0% Default | | | | Disabled | +-----------------------------------------+------------------------+----------------------+ +-----------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=========================================================================================| | No running processes found | +-----------------------------------------------------------------------------------------+ xpu-smi: N/A hpu-smi: N/A 2025-03-13T08:36:37.262563Z INFO text_generation_launcher: Args { model_id: "google/gemma-3-27b-it", revision: None, validation_workers: 2, sharded: Some( true, ), num_shard: Some( 4, ), quantize: None, speculate: None, dtype: None, kv_cache_dtype: None, trust_remote_code: false, max_concurrent_requests: 128, max_best_of: 2, max_stop_sequences: 4, max_top_n_tokens: 5, max_input_tokens: Some( 32000, ), max_input_length: None, max_total_tokens: Some( 64000, ), waiting_served_ratio: 0.3, max_batch_prefill_tokens: Some( 32000, ), max_batch_total_tokens: None, max_waiting_tokens: 20, max_batch_size: None, cuda_graphs: None, hostname: "gemma-3-27b-it-5d7964566c-xnkck", port: 8000, shard_uds_path: "/tmp/text-generation-server", master_addr: "localhost", master_port: 29500, huggingface_hub_cache: Some( "/huggingface/hub", ), weights_cache_override: None, disable_custom_kernels: false, cuda_memory_fraction: 1.0, rope_scaling: None, rope_factor: None, json_output: false, otlp_endpoint: None, otlp_service_name: "text-generation-inference.router", cors_allow_origin: [], api_key: None, watermark_gamma: None, watermark_delta: None, ngrok: false, ngrok_authtoken: 
None, ngrok_edge: None, tokenizer_config_path: None, disable_grammar_support: false, env: true, max_client_batch_size: 1, lora_adapters: None, usage_stats: Off, payload_limit: 2000000, enable_prefill_logprobs: false, } 2025-03-13T08:36:40.043396Z INFO text_generation_launcher: Using attention flashinfer - Prefix caching False 2025-03-13T08:36:40.043429Z INFO text_generation_launcher: Sharding model on 4 processes 2025-03-13T08:36:40.043433Z INFO text_generation_launcher: Using default cuda graphs [1, 2, 4, 8, 16, 32] 2025-03-13T08:36:40.043785Z INFO download: text_generation_launcher: Starting check and download process for google/gemma-3-27b-it 2025-03-13T08:36:43.498233Z INFO text_generation_launcher: Files are already present on the host. Skipping download. 2025-03-13T08:36:44.060714Z INFO download: text_generation_launcher: Successfully downloaded weights for google/gemma-3-27b-it 2025-03-13T08:36:44.061471Z INFO shard-manager: text_generation_launcher: Starting shard rank=0 2025-03-13T08:36:44.590395Z INFO shard-manager: text_generation_launcher: Starting shard rank=1 2025-03-13T08:36:45.196166Z INFO shard-manager: text_generation_launcher: Starting shard rank=2 2025-03-13T08:36:45.867258Z INFO shard-manager: text_generation_launcher: Starting shard rank=3 2025-03-13T08:36:47.973482Z INFO text_generation_launcher: Using prefix caching = False 2025-03-13T08:36:47.973534Z INFO text_generation_launcher: Using Attention = flashinfer 2025-03-13T08:36:54.083888Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0 2025-03-13T08:36:54.609747Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1 2025-03-13T08:36:55.216572Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2 2025-03-13T08:36:55.888966Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3 2025-03-13T08:37:04.091352Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0 2025-03-13T08:37:04.617169Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1 2025-03-13T08:37:05.224253Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2 2025-03-13T08:37:05.896938Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3 2025-03-13T08:37:14.098533Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0 2025-03-13T08:37:14.624769Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1 2025-03-13T08:37:15.231953Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2 2025-03-13T08:37:15.904796Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3 2025-03-13T08:37:24.105963Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0 2025-03-13T08:37:24.632677Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1 2025-03-13T08:37:25.239656Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2 2025-03-13T08:37:25.912803Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3 2025-03-13T08:37:34.113333Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0 2025-03-13T08:37:34.641461Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... 
rank=1 2025-03-13T08:37:35.247092Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2 2025-03-13T08:37:35.920604Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3 2025-03-13T08:37:44.120842Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0 2025-03-13T08:37:44.649364Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1 2025-03-13T08:37:45.254347Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2 2025-03-13T08:37:45.928487Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3 2025-03-13T08:37:54.128489Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0 2025-03-13T08:37:54.657147Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1 2025-03-13T08:37:55.261709Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2 2025-03-13T08:37:55.936555Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3 2025-03-13T08:38:04.135901Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0 2025-03-13T08:38:04.664958Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1 2025-03-13T08:38:05.269205Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2 2025-03-13T08:38:05.944561Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3 2025-03-13T08:38:14.143354Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0 2025-03-13T08:38:14.672706Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1 2025-03-13T08:38:15.276730Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2 2025-03-13T08:38:15.952321Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3 2025-03-13T08:38:18.500055Z INFO text_generation_launcher: Using prefill chunking = False 2025-03-13T08:38:19.085091Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-1 2025-03-13T08:38:19.176301Z INFO shard-manager: text_generation_launcher: Shard ready in 94.574638951s rank=1 2025-03-13T08:38:21.300395Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-2 2025-03-13T08:38:21.301426Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-0 2025-03-13T08:38:21.301937Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-3 2025-03-13T08:38:21.348798Z INFO shard-manager: text_generation_launcher: Shard ready in 97.272539231s rank=0 2025-03-13T08:38:21.356498Z INFO shard-manager: text_generation_launcher: Shard ready in 95.475191243s rank=3 2025-03-13T08:38:21.385097Z INFO shard-manager: text_generation_launcher: Shard ready in 96.176034962s rank=2 2025-03-13T08:38:22.958763Z INFO text_generation_launcher: Starting Webserver 2025-03-13T08:38:23.126019Z INFO text_generation_router_v3: backends/v3/src/lib.rs:125: Warming up model 2025-03-13T08:38:23.330948Z INFO text_generation_launcher: Using optimized Triton indexing kernels. 2025-03-13T08:38:25.345859Z ERROR text_generation_launcher: Method Warmup encountered an error. 
Traceback (most recent call last): File "/usr/src/server/text_generation_server/models/flash_causal_lm.py", line 1585, in warmup _, _batch, _ = self.generate_token(batch) File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/contextlib.py", line 81, in inner return func(*args, **kwds) File "/usr/src/server/text_generation_server/models/flash_causal_lm.py", line 1971, in generate_token out, speculative_logits = self.forward(batch, adapter_data) File "/usr/src/server/text_generation_server/models/vlm_causal_lm.py", line 482, in forward logits, speculative_logits = self.model.forward( File "/usr/src/server/text_generation_server/models/custom_modeling/flash_gemma3_modeling.py", line 888, in forward hidden_states = self.text_model.model( File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) File "/usr/src/server/text_generation_server/models/custom_modeling/flash_gemma3_modeling.py", line 547, in forward hidden_states, residual = layer( File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) File "/usr/src/server/text_generation_server/models/custom_modeling/flash_gemma3_modeling.py", line 449, in forward attn_output = self.self_attn( File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) File "/usr/src/server/text_generation_server/models/custom_modeling/flash_gemma3_modeling.py", line 296, in forward attn_output = F.scaled_dot_product_attention( torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 30.52 GiB. GPU 3 has a total capacity of 79.10 GiB of which 14.37 GiB is free. Process 3342359 has 64.72 GiB memory in use. 79.10 GiB allowed; Of the allocated memory 62.19 GiB is allocated by PyTorch, and 1.00 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/src/.venv/bin/text-generation-server", line 10, in <module> sys.exit(app()) File "/usr/src/.venv/lib/python3.11/site-packages/typer/main.py", line 323, in __call__ return get_command(self)(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/click/core.py", line 1161, in __call__ return self.main(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/typer/core.py", line 743, in main return _main( File "/usr/src/.venv/lib/python3.11/site-packages/typer/core.py", line 198, in _main rv = self.invoke(ctx) File "/usr/src/.venv/lib/python3.11/site-packages/click/core.py", line 1697, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/usr/src/.venv/lib/python3.11/site-packages/click/core.py", line 1443, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/src/.venv/lib/python3.11/site-packages/click/core.py", line 788, in invoke return __callback(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/typer/main.py", line 698, in wrapper return callback(**use_params) File "/usr/src/server/text_generation_server/cli.py", line 119, in serve server.serve( File "/usr/src/server/text_generation_server/server.py", line 315, in serve asyncio.run( File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/runners.py", line 190, in run return runner.run(main) File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 641, in run_until_complete self.run_forever() File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 608, in run_forever self._run_once() File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 1936, in _run_once handle._run() File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/events.py", line 84, in _run self._context.run(self._callback, *self._args) File "/usr/src/.venv/lib/python3.11/site-packages/grpc_interceptor/server.py", line 165, in invoke_intercept_method return await self.intercept( > File "/usr/src/server/text_generation_server/interceptor.py", line 24, in intercept return await response File "/usr/src/.venv/lib/python3.11/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 120, in _unary_interceptor raise error File "/usr/src/.venv/lib/python3.11/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 111, in _unary_interceptor return await behavior(request_or_iterator, context) File "/usr/src/server/text_generation_server/server.py", line 144, in Warmup self.model.warmup(batch, max_input_tokens, max_total_tokens) File "/usr/src/server/text_generation_server/models/flash_causal_lm.py", line 1587, in warmup raise RuntimeError( RuntimeError: Not enough memory to handle 32000 prefill tokens. You need to decrease `--max-batch-prefill-tokens` 2025-03-13T08:38:25.349736Z ERROR text_generation_launcher: Method Warmup encountered an error. 
Traceback (most recent call last): File "/usr/src/server/text_generation_server/models/flash_causal_lm.py", line 1585, in warmup _, _batch, _ = self.generate_token(batch) File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/contextlib.py", line 81, in inner return func(*args, **kwds) File "/usr/src/server/text_generation_server/models/flash_causal_lm.py", line 1971, in generate_token out, speculative_logits = self.forward(batch, adapter_data) File "/usr/src/server/text_generation_server/models/vlm_causal_lm.py", line 482, in forward logits, speculative_logits = self.model.forward( File "/usr/src/server/text_generation_server/models/custom_modeling/flash_gemma3_modeling.py", line 888, in forward hidden_states = self.text_model.model( File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) File "/usr/src/server/text_generation_server/models/custom_modeling/flash_gemma3_modeling.py", line 547, in forward hidden_states, residual = layer( File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) File "/usr/src/server/text_generation_server/models/custom_modeling/flash_gemma3_modeling.py", line 449, in forward attn_output = self.self_attn( File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) File "/usr/src/server/text_generation_server/models/custom_modeling/flash_gemma3_modeling.py", line 296, in forward attn_output = F.scaled_dot_product_attention( torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 30.52 GiB. GPU 1 has a total capacity of 79.10 GiB of which 14.37 GiB is free. Process 3342101 has 64.72 GiB memory in use. 79.10 GiB allowed; Of the allocated memory 62.19 GiB is allocated by PyTorch, and 1.00 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/src/.venv/bin/text-generation-server", line 10, in <module> sys.exit(app()) File "/usr/src/.venv/lib/python3.11/site-packages/typer/main.py", line 323, in __call__ return get_command(self)(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/click/core.py", line 1161, in __call__ return self.main(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/typer/core.py", line 743, in main return _main( File "/usr/src/.venv/lib/python3.11/site-packages/typer/core.py", line 198, in _main rv = self.invoke(ctx) File "/usr/src/.venv/lib/python3.11/site-packages/click/core.py", line 1697, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/usr/src/.venv/lib/python3.11/site-packages/click/core.py", line 1443, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/src/.venv/lib/python3.11/site-packages/click/core.py", line 788, in invoke return __callback(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/typer/main.py", line 698, in wrapper return callback(**use_params) File "/usr/src/server/text_generation_server/cli.py", line 119, in serve server.serve( File "/usr/src/server/text_generation_server/server.py", line 315, in serve asyncio.run( File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/runners.py", line 190, in run return runner.run(main) File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 641, in run_until_complete self.run_forever() File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 608, in run_forever self._run_once() File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 1936, in _run_once handle._run() File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/events.py", line 84, in _run self._context.run(self._callback, *self._args) File "/usr/src/.venv/lib/python3.11/site-packages/grpc_interceptor/server.py", line 165, in invoke_intercept_method return await self.intercept( > File "/usr/src/server/text_generation_server/interceptor.py", line 24, in intercept return await response File "/usr/src/.venv/lib/python3.11/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 120, in _unary_interceptor raise error File "/usr/src/.venv/lib/python3.11/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 111, in _unary_interceptor return await behavior(request_or_iterator, context) File "/usr/src/server/text_generation_server/server.py", line 144, in Warmup self.model.warmup(batch, max_input_tokens, max_total_tokens) File "/usr/src/server/text_generation_server/models/flash_causal_lm.py", line 1587, in warmup raise RuntimeError( RuntimeError: Not enough memory to handle 32000 prefill tokens. You need to decrease `--max-batch-prefill-tokens` 2025-03-13T08:38:25.350178Z ERROR text_generation_launcher: Method Warmup encountered an error. 
Traceback (most recent call last): File "/usr/src/server/text_generation_server/models/flash_causal_lm.py", line 1585, in warmup _, _batch, _ = self.generate_token(batch) File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/contextlib.py", line 81, in inner return func(*args, **kwds) File "/usr/src/server/text_generation_server/models/flash_causal_lm.py", line 1971, in generate_token out, speculative_logits = self.forward(batch, adapter_data) File "/usr/src/server/text_generation_server/models/vlm_causal_lm.py", line 482, in forward logits, speculative_logits = self.model.forward( File "/usr/src/server/text_generation_server/models/custom_modeling/flash_gemma3_modeling.py", line 888, in forward hidden_states = self.text_model.model( File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) File "/usr/src/server/text_generation_server/models/custom_modeling/flash_gemma3_modeling.py", line 547, in forward hidden_states, residual = layer( File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) File "/usr/src/server/text_generation_server/models/custom_modeling/flash_gemma3_modeling.py", line 449, in forward attn_output = self.self_attn( File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) File "/usr/src/server/text_generation_server/models/custom_modeling/flash_gemma3_modeling.py", line 296, in forward attn_output = F.scaled_dot_product_attention( torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 30.52 GiB. GPU 2 has a total capacity of 79.10 GiB of which 14.37 GiB is free. Process 3342216 has 64.72 GiB memory in use. 79.10 GiB allowed; Of the allocated memory 62.19 GiB is allocated by PyTorch, and 1.00 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/src/.venv/bin/text-generation-server", line 10, in <module> sys.exit(app()) File "/usr/src/.venv/lib/python3.11/site-packages/typer/main.py", line 323, in __call__ return get_command(self)(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/click/core.py", line 1161, in __call__ return self.main(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/typer/core.py", line 743, in main return _main( File "/usr/src/.venv/lib/python3.11/site-packages/typer/core.py", line 198, in _main rv = self.invoke(ctx) File "/usr/src/.venv/lib/python3.11/site-packages/click/core.py", line 1697, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/usr/src/.venv/lib/python3.11/site-packages/click/core.py", line 1443, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/src/.venv/lib/python3.11/site-packages/click/core.py", line 788, in invoke return __callback(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/typer/main.py", line 698, in wrapper return callback(**use_params) File "/usr/src/server/text_generation_server/cli.py", line 119, in serve server.serve( File "/usr/src/server/text_generation_server/server.py", line 315, in serve asyncio.run( File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/runners.py", line 190, in run return runner.run(main) File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 641, in run_until_complete self.run_forever() File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 608, in run_forever self._run_once() File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 1936, in _run_once handle._run() File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/events.py", line 84, in _run self._context.run(self._callback, *self._args) File "/usr/src/.venv/lib/python3.11/site-packages/grpc_interceptor/server.py", line 165, in invoke_intercept_method return await self.intercept( > File "/usr/src/server/text_generation_server/interceptor.py", line 24, in intercept return await response File "/usr/src/.venv/lib/python3.11/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 120, in _unary_interceptor raise error File "/usr/src/.venv/lib/python3.11/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 111, in _unary_interceptor return await behavior(request_or_iterator, context) File "/usr/src/server/text_generation_server/server.py", line 144, in Warmup self.model.warmup(batch, max_input_tokens, max_total_tokens) File "/usr/src/server/text_generation_server/models/flash_causal_lm.py", line 1587, in warmup raise RuntimeError( RuntimeError: Not enough memory to handle 32000 prefill tokens. You need to decrease `--max-batch-prefill-tokens` 2025-03-13T08:38:25.350698Z ERROR text_generation_launcher: Method Warmup encountered an error. 
Traceback (most recent call last): File "/usr/src/server/text_generation_server/models/flash_causal_lm.py", line 1585, in warmup _, _batch, _ = self.generate_token(batch) File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/contextlib.py", line 81, in inner return func(*args, **kwds) File "/usr/src/server/text_generation_server/models/flash_causal_lm.py", line 1971, in generate_token out, speculative_logits = self.forward(batch, adapter_data) File "/usr/src/server/text_generation_server/models/vlm_causal_lm.py", line 482, in forward logits, speculative_logits = self.model.forward( File "/usr/src/server/text_generation_server/models/custom_modeling/flash_gemma3_modeling.py", line 888, in forward hidden_states = self.text_model.model( File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) File "/usr/src/server/text_generation_server/models/custom_modeling/flash_gemma3_modeling.py", line 547, in forward hidden_states, residual = layer( File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) File "/usr/src/server/text_generation_server/models/custom_modeling/flash_gemma3_modeling.py", line 449, in forward attn_output = self.self_attn( File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) File "/usr/src/server/text_generation_server/models/custom_modeling/flash_gemma3_modeling.py", line 296, in forward attn_output = F.scaled_dot_product_attention( torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 30.52 GiB. GPU 0 has a total capacity of 79.10 GiB of which 14.37 GiB is free. Process 3342032 has 64.72 GiB memory in use. 79.10 GiB allowed; Of the allocated memory 62.19 GiB is allocated by PyTorch, and 1.00 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/src/.venv/bin/text-generation-server", line 10, in <module> sys.exit(app()) File "/usr/src/.venv/lib/python3.11/site-packages/typer/main.py", line 323, in __call__ return get_command(self)(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/click/core.py", line 1161, in __call__ return self.main(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/typer/core.py", line 743, in main return _main( File "/usr/src/.venv/lib/python3.11/site-packages/typer/core.py", line 198, in _main rv = self.invoke(ctx) File "/usr/src/.venv/lib/python3.11/site-packages/click/core.py", line 1697, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/usr/src/.venv/lib/python3.11/site-packages/click/core.py", line 1443, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/src/.venv/lib/python3.11/site-packages/click/core.py", line 788, in invoke return __callback(*args, **kwargs) File "/usr/src/.venv/lib/python3.11/site-packages/typer/main.py", line 698, in wrapper return callback(**use_params) File "/usr/src/server/text_generation_server/cli.py", line 119, in serve server.serve( File "/usr/src/server/text_generation_server/server.py", line 315, in serve asyncio.run( File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/runners.py", line 190, in run return runner.run(main) File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 641, in run_until_complete self.run_forever() File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 608, in run_forever self._run_once() File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 1936, in _run_once handle._run() File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/events.py", line 84, in _run self._context.run(self._callback, *self._args) File "/usr/src/.venv/lib/python3.11/site-packages/grpc_interceptor/server.py", line 165, in invoke_intercept_method return await self.intercept( > File "/usr/src/server/text_generation_server/interceptor.py", line 24, in intercept return await response File "/usr/src/.venv/lib/python3.11/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 120, in _unary_interceptor raise error File "/usr/src/.venv/lib/python3.11/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 111, in _unary_interceptor return await behavior(request_or_iterator, context) File "/usr/src/server/text_generation_server/server.py", line 144, in Warmup self.model.warmup(batch, max_input_tokens, max_total_tokens) File "/usr/src/server/text_generation_server/models/flash_causal_lm.py", line 1587, in warmup raise RuntimeError( RuntimeError: Not enough memory to handle 32000 prefill tokens. 
You need to decrease `--max-batch-prefill-tokens` 2025-03-13T08:38:25.358791Z ERROR warmup{max_input_length=Some(32000) max_prefill_tokens=32000 max_total_tokens=Some(64000) max_batch_size=None}:warmup: text_generation_router_v3::client: backends/v3/src/client/mod.rs:45: Server error: Not enough memory to handle 32000 prefill tokens. You need to decrease `--max-batch-prefill-tokens` 2025-03-13T08:38:25.370414Z ERROR warmup{max_input_length=Some(32000) max_prefill_tokens=32000 max_total_tokens=Some(64000) max_batch_size=None}:warmup: text_generation_router_v3::client: backends/v3/src/client/mod.rs:45: Server error: Not enough memory to handle 32000 prefill tokens. You need to decrease `--max-batch-prefill-tokens` 2025-03-13T08:38:25.381723Z ERROR warmup{max_input_length=Some(32000) max_prefill_tokens=32000 max_total_tokens=Some(64000) max_batch_size=None}:warmup: text_generation_router_v3::client: backends/v3/src/client/mod.rs:45: Server error: Not enough memory to handle 32000 prefill tokens. You need to decrease `--max-batch-prefill-tokens` 2025-03-13T08:38:25.392642Z ERROR warmup{max_input_length=Some(32000) max_prefill_tokens=32000 max_total_tokens=Some(64000) max_batch_size=None}:warmup: text_generation_router_v3::client: backends/v3/src/client/mod.rs:45: Server error: Not enough memory to handle 32000 prefill tokens. You need to decrease `--max-batch-prefill-tokens` Error: Backend(Warmup(Generation("Not enough memory to handle 32000 prefill tokens. You need to decrease `--max-batch-prefill-tokens`"))) 2025-03-13T08:38:25.403245Z ERROR text_generation_launcher: Webserver Crashed 2025-03-13T08:38:25.403260Z INFO text_generation_launcher: Shutting down shards 2025-03-13T08:38:25.452182Z INFO shard-manager: text_generation_launcher: Terminating shard rank=0 2025-03-13T08:38:25.452239Z INFO shard-manager: text_generation_launcher: Waiting for shard to gracefully shutdown rank=0 2025-03-13T08:38:25.459966Z INFO shard-manager: text_generation_launcher: Terminating shard rank=3 2025-03-13T08:38:25.462190Z INFO shard-manager: text_generation_launcher: Waiting for shard to gracefully shutdown rank=3 2025-03-13T08:38:25.481703Z INFO shard-manager: text_generation_launcher: Terminating shard rank=1 2025-03-13T08:38:25.481742Z INFO shard-manager: text_generation_launcher: Waiting for shard to gracefully shutdown rank=1 2025-03-13T08:38:25.488581Z INFO shard-manager: text_generation_launcher: Terminating shard rank=2 2025-03-13T08:38:25.488620Z INFO shard-manager: text_generation_launcher: Waiting for shard to gracefully shutdown rank=2 2025-03-13T08:38:25.862773Z INFO shard-manager: text_generation_launcher: shard terminated rank=3 2025-03-13T08:38:27.053688Z INFO shard-manager: text_generation_launcher: shard terminated rank=0 2025-03-13T08:38:27.290200Z INFO shard-manager: text_generation_launcher: shard terminated rank=2 2025-03-13T08:38:27.583555Z INFO shard-manager: text_generation_launcher: shard terminated rank=1 Error: WebserverFailed
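For what it's worth, a back-of-envelope reading of the failing allocation. The traceback shows warmup going through the plain F.scaled_dot_product_attention path, which materializes an O(seq²) score matrix per head; assuming 32 attention heads for gemma-3-27b, tensor parallelism across the 4 GPUs, and fp32 attention scores (all assumptions on my part), the arithmetic reproduces the reported 30.52 GiB exactly:
```python
seq = 32_000            # --max-batch-prefill-tokens from the launcher args
heads_total = 32        # assumed attention head count for gemma-3-27b
tp_shards = 4           # one shard per H100
bytes_per_score = 4     # assumed fp32 attention scores

per_gpu = (heads_total // tp_shards) * seq * seq * bytes_per_score
print(per_gpu / 2**30)  # -> 30.517..., matching "Tried to allocate 30.52 GiB"
```
If that reading is right, the cost grows quadratically with the prefill size, which would explain why ~23k tokens fit but 32k (let alone 64k or 128k) does not without chunked prefill or a flash-attention path.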
2hard
Title: List index out of range in validation.py Body: I want to apologize beforehand if this issue is a bit incoherent. I'm not yet 100% sure what is happening or exactly how to reproduce it, but I've seen it happen, so I'll try to explain that instead. ### Expected Behavior No exception happening. ### Actual Behavior I've gotten this stacktrace (but only sometimes):
```pytb
[2019-05-28 09:18:24,962] ERROR in patch: list index out of range Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/eve/methods/patch.py", line 179, in patch_internal updates, object_id, original, normalize_document File "/usr/local/lib/python3.7/site-packages/eve/validation.py", line 44, in validate_update document, update=True, normalize=normalize_document File "/usr/local/lib/python3.7/site-packages/cerberus/validator.py", line 990, in validate self.__normalize_mapping(self.document, self.schema) File "/usr/local/lib/python3.7/site-packages/cerberus/validator.py", line 671, in __normalize_mapping self.__normalize_containers(mapping, schema) File "/usr/local/lib/python3.7/site-packages/cerberus/validator.py", line 757, in __normalize_containers self.__normalize_sequence_per_schema(field, mapping, schema) File "/usr/local/lib/python3.7/site-packages/cerberus/validator.py", line 826, in __normalize_sequence_per_schema result = validator.normalized(document, always_return_document=True) File "/usr/local/lib/python3.7/site-packages/cerberus/validator.py", line 646, in normalized self.__normalize_mapping(self.document, self.schema) File "/usr/local/lib/python3.7/site-packages/cerberus/validator.py", line 671, in __normalize_mapping self.__normalize_containers(mapping, schema) File "/usr/local/lib/python3.7/site-packages/cerberus/validator.py", line 748, in __normalize_containers self.__normalize_mapping_per_schema(field, mapping, schema) File "/usr/local/lib/python3.7/site-packages/cerberus/validator.py", line 812, in __normalize_mapping_per_schema result_value = validator.normalized(mapping[field], always_return_document=True) File "/usr/local/lib/python3.7/site-packages/cerberus/validator.py", line 646, in normalized self.__normalize_mapping(self.document, self.schema) File "/usr/local/lib/python3.7/site-packages/cerberus/validator.py", line 669, in __normalize_mapping self.__normalize_default_fields(mapping, schema) File "/usr/local/lib/python3.7/site-packages/cerberus/validator.py", line 922, in __normalize_default_fields self._normalize_default(mapping, schema, field) File "/usr/local/lib/python3.7/site-packages/eve/validation.py", line 72, in _normalize_default challenge = challenge[sub_field] IndexError: list index out of range
```
I have been digging through Eve's code and added a few prints in validation.py to figure out what is going on, and I'm fairly sure something gets confused around how Eve handles the persisted_document field in validation.py when I'm patching a document (perhaps only in a specific way). What I have in my schema is a field that is a list of dictionaries. - The first POST added one item to this list; so far all is good. - On the next PATCH, when I try to update my list to contain two items instead, the above exception happens.
My theory here is that somewhere in validation (normalization) logic Eve gets the keys from the new data (keys for items 0 and 1 in list in request payload for the PATCH request) but somehow it's trying to read data from the persisted_document (which by this time, in the middle of the PATCH request is still only one item long) As I explained above, I have added prints in validation.py and seen that the "challenge" object on line 72 is a list of one item instead of two, I also see in the code that challenge is initialized from whatever is in persisted_document. I'm not sure if I understand Eve code well enough (yet) but shouldn't this code be dealing with the document from the request instead of the persisted one?
1medium
Title: Will Keras welcome new backend contributions? Body: Suppose we could develop a brand-new backend for Keras, such as [Paddle](https://github.com/PaddlePaddle/Paddle). Would Keras welcome our new backend contributions? In fact, I can understand that adding a new backend will increase the maintenance burden of the keras-team. Therefore, I would like to ask for keras-team's opinion.
3misc
Title: In API Update, make it possible to set specific channels Body: In the API currently we're only able to set all channels or no channels on a check. We need to be able to set specific channels programmatically through the API, and not only through the Web-UI. PS. If this already works, the API-Docs need to be updated with new information regarding how this should be done.
1medium
Title: Please provide evidence for performance claims Body: The readme claims "Typical speedups over Python are on the order of 10-100x or more, on a single thread." Where do these numbers come from? Please provide the benchmarks used, and any additional information needed to reproduce the result.
1medium
Title: Upgrade Visions library Body: ### Missing functionality Visions 0.75 installs a dependency that consumes 1.8 GB of hard disk space; in the newer versions of visions this dependency has been removed. ### Proposed feature Upgrade to the latest visions version. ### Alternatives considered _No response_ ### Additional context _No response_
1medium
Title: How to fine-tune the BERT model on a cloze-style task? Body: I mean, not just using the BERT model to predict answers, but also training it.
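For the training side, the cloze task is exactly BERT's masked-language-model pre-training objective, so one common route is MLM fine-tuning. Below is a minimal, hedged sketch using the Hugging Face `transformers` and `datasets` libraries (not this repository's code); the dataset name is only a placeholder for your own cloze corpus.
```python
# A minimal sketch (not the original BERT repo's pipeline): fine-tuning a
# masked-LM head for cloze-style prediction with Hugging Face transformers.
from transformers import (BertTokenizerFast, BertForMaskedLM, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Any plain-text dataset works; wikitext is used here only as a placeholder.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
tokenized = raw.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
                    batched=True, remove_columns=["text"])

# The collator randomly masks 15% of tokens, so training is cloze training.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mlm-finetune", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```
After training, the same model can be queried with `[MASK]` tokens to fill cloze blanks.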
1medium
Title: [feature] When viewing a diff with multiple changes since last view, default to the last viewed date Body: **Version and OS** v0.45.5 **Is your feature request related to a problem? Please describe.** When I see a bolded watch in the watch overview, I click it and it autoscrolls to the diff text. The issue is it only includes the diff between the latest two copies of the watch, and multiple changes may have occurred since my last view, since the diff date selected is scrolled out of frame, it may not be obvious that other changes are hidden between separate diff dates. **Describe the solution you'd like** Since it seems last_viewed is stored in the database, it should be possible to select the left side comparison to be the newest history item since the last_view date and diff it against the most recent version. **Describe the use-case and give concrete real-world examples** Page A is set to be checked every hour and it does change every hour. 12 hours go by since the user last viewed the diff, and the default diff link only shows changes from the last hour, the user has to scroll up to select the last date.
1medium
Title: Why does a .docx file sent with send_file() arrive as ERROR_MESSAGE_MAIN? Body: When using the send_file() method, either it fails to send with this error: Traceback (most recent call last): File "<console>", line 1, in <module> File "/Users/uband/git/bot-env/lib/python3.6/site-packages/wxpy/api/chats/chat.py", line 54, in wrapped ret = do_send() File "/Users/uband/git/bot-env/lib/python3.6/site-packages/wxpy/utils/misc.py", line 72, in wrapped smart_map(check_response_body, ret) File "/Users/uband/git/bot-env/lib/python3.6/site-packages/wxpy/utils/misc.py", line 207, in smart_map return func(i, *args, **kwargs) File "/Users/uband/git/bot-env/lib/python3.6/site-packages/wxpy/utils/misc.py", line 53, in check_response_body raise ResponseError(err_code=err_code, err_msg=err_msg) wxpy.exceptions.ResponseError: err_code: 1; err_msg: >>> or the file that is received just shows ERROR_MESSAGE_MAIN when opened.
1medium
Title: ModelSchema with lower camel case results? Body: **Is your feature request related to a problem? Please describe.** I have Django models that are snake case, as is traditional (e.g., full_name). The client already specified lower camel case (e.g., fullName). So, every API call I have, I want to use lower camel case. **Describe the solution you'd like** I'd like to be able to use alias_generator=to_lower_camel. This sort of code does not currently work for me: ``` from pydantic.utils import to_lower_camel from .models import MyObj from ninja import ModelSchema class MyObjSchema(ModelSchema): class Config: model = MyObj model_fields = ["id", "name", "full_name"] # make names lower camel case alias_generator = to_lower_camel ``` It results in `full_name: null` in all the results. If I comment out the alias_generator, it returns `full_name` populated, so I know this is the alias_generator.
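A hedged sketch of a workaround that has reportedly worked with pydantic-v1-era django-ninja (both `allow_population_by_field_name` and the operation-level `by_alias` flag are assumptions to verify against your installed versions): keep the camelCase alias generator, but allow population by the snake_case field names so ORM objects fill `full_name` correctly, then ask ninja to serialize the response by alias. The `api` object below is assumed to be your existing `NinjaAPI` instance.
```python
# Hedged workaround sketch, not verified against every django-ninja version.
from typing import List
from pydantic.utils import to_lower_camel
from ninja import ModelSchema
from .models import MyObj

class MyObjSchema(ModelSchema):
    class Config:
        model = MyObj
        model_fields = ["id", "name", "full_name"]
        alias_generator = to_lower_camel
        allow_population_by_field_name = True  # lets ORM objects fill snake_case fields

@api.get("/objs", response=List[MyObjSchema], by_alias=True)  # `api` assumed defined
def list_objs(request):
    return MyObj.objects.all()  # serialized with camelCase keys, e.g. fullName
```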
1medium
Title: Readme outdated video Body: The video of Queen Elisabeth in the readme is private. Please remove!
0easy
Title: Bug: Linear-Sentry Integration Not Working Despite Successful Activation Body: ### Environment SaaS (https://sentry.io/) ### Steps to Reproduce I'll add that important error message to the bug report: ## Bug: Linear-Sentry Integration Not Working Despite Successful Activation ### Description I successfully activated the Sentry integration through Linear (confirmation received), but I'm unable to create Linear tickets from Sentry. The issue persists across multiple Linear workspaces (both EU and US hosted instances). When attempting to use the integration, an error message appears stating "Unable to connect to Linear." ### Steps to Reproduce 1. Initiated the Sentry integration workflow from Linear 2. Received confirmation of successful integration activation 3. Attempted to create Linear tickets from Sentry 4. Observed error message: "Unable to connect to Linear" 5. Ticket creation fails ### Environment Details - Linear workspaces tested: 2 (one EU-hosted, one US-hosted) ### Additional Information - The integration appears as active in both Linear and Sentry interfaces - The error occurs consistently across different Linear workspaces ![Image](https://github.com/user-attachments/assets/31db09bb-f5df-4c4d-afa8-421b9c220763) ![Image](https://github.com/user-attachments/assets/561179a6-4401-4d64-bf9a-e71aa7b7f1ad) ![Image](https://github.com/user-attachments/assets/336eb29c-4dd9-44a2-8195-fd618c6d13bb) ![Image](https://github.com/user-attachments/assets/bb505d6f-38b4-443b-9558-5c7c7bb8118c) ![Image](https://github.com/user-attachments/assets/f30641e9-5e38-42ef-8609-0e7bb75193b5) ### Expected Result Should be able to create Linear tickets directly from Sentry after successful integration activation. ### Actual Result Unable to create Linear tickets from Sentry despite the integration showing as successfully activated. A specific error message "Unable to connect to Linear" is displayed when attempting to use the integration. ### Product Area Settings - Integrations ### Link _No response_ ### DSN _No response_ ### Version _No response_
1medium
Title: A question about version updates Body: First of all, thanks to the author for maintaining this open-source project. Compared with alternatives, this library is really practical for fetching stock data, and it is updated very frequently; special thanks for that! Since releases come out so often, I would like to make a suggestion: please add release notes describing the issues fixed or the features added in each version, so that users can decide whether to update the library. Thanks again!
3misc
Title: boost logging with rich Body:
1medium
Title: Subclassing generic dataclass loses type of parent class fields Body: ### Initial Checks - [x] I confirm that I'm using Pydantic V2 ### Description This might be a non-issue if dataclasses are not intended to be subclassed. When extending a generic dataclass, the fields from the superclass have no validation done on them. ```python @dataclass class A(Generic[T]): a: T @dataclass class B(A[U]): b: U ``` I would expect here that deserialising something to `B[int]` would validate that `a` was an `int` but this doesn't appear to happen. ### Example Code ```Python from pydantic.dataclasses import dataclass from pydantic import TypeAdapter from typing import TypeVar, Generic T = TypeVar('T') U = TypeVar('U') @dataclass class A(Generic[T]): a: T @dataclass class B(A[U]): b: U b = TypeAdapter(B[int]).validate_python({"a": ["not", "an", "int"], "b": "42"}) # ^^^^^^^^^^^^^^^ # This does not fail even though the type of a is not valid print(b) assert b.b == 42 # <- Passes as "42" has been converted to an int correctly assert type(b.a) == int # <- This fails as a is a list ``` ### Python, Pydantic & OS Version ```Text pydantic version: 2.10.6 pydantic-core version: 2.27.2 pydantic-core build: profile=release pgo=false install path: /path/to/.venv/lib/python3.12/site-packages/pydantic python version: 3.12.7 (main, Oct 8 2024, 00:20:25) [Clang 18.1.8 ] platform: Linux-6.6.72-1-lts-x86_64-with-glibc2.40 related packages: pydantic-settings-2.7.1 fastapi-0.115.7 pyright-1.1.392.post0 typing_extensions-4.12.2 commit: unknown ```
1medium
Title: `single_cls` training dies quietly during 1st epoch Body: ### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report. ### Ultralytics YOLO Component Train ### Bug I have a dataset consisting of ~95 categories that I'm trying to train a binary object detection model on for my tagging utility for it to automatically recommend bounding boxes. I'm utilizing the `single_cls` command line option to "flatten" my training data into a single class for classification. When running the training session on a `Yolo11n` model, the training process dies quietly while training the first epoch. I can confirm running training on the same dataset, using the same base model, without the `single_cls=true` option completes training successfully. ### Environment ``` Ultralytics 8.3.94 🚀 Python-3.12.3 torch-2.6.0+cu126 CUDA:0 (NVIDIA GeForce RTX 3090, 24253MiB) Setup complete ✅ (32 CPUs, 31.3 GB RAM, 133.3/1831.7 GB disk) OS Linux-6.8.0-55-generic-x86_64-with-glibc2.39 Environment Linux Python 3.12.3 Install pip Path /home/enusbaum/yolo/venv/lib/python3.12/site-packages/ultralytics RAM 31.26 GB Disk 133.3/1831.7 GB CPU AMD Ryzen 9 5950X 16-Core Processor CPU count 32 GPU NVIDIA GeForce RTX 3090, 24253MiB GPU count 1 CUDA 12.6 numpy ✅ 2.1.1<=2.1.1,>=1.23.0 matplotlib ✅ 3.10.1>=3.3.0 opencv-python ✅ 4.11.0.86>=4.6.0 pillow ✅ 11.0.0>=7.1.2 pyyaml ✅ 6.0.2>=5.3.1 requests ✅ 2.32.3>=2.23.0 scipy ✅ 1.15.2>=1.4.1 torch ✅ 2.6.0+cu126>=1.8.0 torch ✅ 2.6.0+cu126!=2.4.0,>=1.8.0; sys_platform == "win32" torchvision ✅ 0.21.0+cu126>=0.9.0 tqdm ✅ 4.67.1>=4.64.0 psutil ✅ 7.0.0 py-cpuinfo ✅ 9.0.0 pandas ✅ 2.2.3>=1.1.4 seaborn ✅ 0.13.2>=0.11.0 ultralytics-thop ✅ 2.0.14>=2.0.0 ``` ### Minimal Reproducible Example ``` yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true ``` ### Additional While I can see the Python processes are still running: ``` enusbaum 2302 5.6 2.1 13017132 696784 pts/1 Sl 11:32 0:31 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true enusbaum 2303 5.5 2.1 13015644 707028 pts/1 Sl 11:32 0:30 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true enusbaum 2304 5.4 2.1 13017552 714692 pts/1 Sl 11:32 0:29 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true enusbaum 2324 3.9 0.2 13093984 91624 pts/1 Sl 11:32 0:21 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true enusbaum 2327 2.7 0.2 13097044 92048 pts/1 Sl 11:32 0:15 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true enusbaum 2328 3.8 0.2 13094032 90900 pts/1 Sl 11:32 0:21 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true enusbaum 2329 
3.8 0.2 13094044 92320 pts/1 Sl 11:32 0:20 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true enusbaum 2334 2.8 0.2 13097128 92176 pts/1 Sl 11:32 0:15 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true enusbaum 2335 3.8 0.2 13094116 92028 pts/1 Sl 11:32 0:21 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true enusbaum 2336 3.7 0.2 13094128 91428 pts/1 Sl 11:32 0:20 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true enusbaum 2338 3.9 0.2 13094152 91688 pts/1 Sl 11:32 0:21 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true enusbaum 2341 2.7 0.2 13097212 92076 pts/1 Sl 11:32 0:15 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true enusbaum 2342 3.7 0.2 13094200 91976 pts/1 Sl 11:32 0:20 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true enusbaum 2343 3.8 0.2 13094212 90960 pts/1 Sl 11:32 0:20 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true ``` They seem to be dead as they're not consuming any CPU, and are still holding on to GPU memory: ``` PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 2302 enusbaum 20 0 12.4g 696784 94344 S 0.0 2.1 0:31.06 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train + 2303 enusbaum 20 0 12.4g 707028 96184 S 0.0 2.2 0:30.38 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train + 2304 enusbaum 20 0 12.4g 714692 96904 S 0.0 2.2 0:29.89 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train + 2324 enusbaum 20 0 12.5g 91624 75944 S 0.0 0.3 0:21.43 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train + 2327 enusbaum 20 0 12.5g 92048 76424 S 0.0 0.3 0:15.26 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train + 2328 enusbaum 20 0 12.5g 90900 75912 S 0.0 0.3 0:21.05 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train + 2329 enusbaum 20 0 12.5g 92320 76168 S 0.0 0.3 0:20.93 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train + 2334 enusbaum 20 0 12.5g 92176 75912 S 0.0 0.3 0:15.43 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train + 2335 enusbaum 20 0 12.5g 92028 76168 S 0.0 0.3 0:21.06 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train + 2336 enusbaum 20 0 12.5g 91428 75912 S 0.0 0.3 0:20.39 /home/enusbaum/yolo/venv/bin/python3 
/home/enusbaum/yolo/venv/bin/yolo detect train + 2338 enusbaum 20 0 12.5g 91688 75912 S 0.0 0.3 0:21.45 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train + 2341 enusbaum 20 0 12.5g 92076 75912 S 0.0 0.3 0:15.17 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train + 2342 enusbaum 20 0 12.5g 91976 75912 S 0.0 0.3 0:20.60 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train + 2343 enusbaum 20 0 12.5g 90960 75656 S 0.0 0.3 0:20.81 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train + ``` ``` +-----------------------------------------------------------------------------------------+ | NVIDIA-SMI 550.120 Driver Version: 550.120 CUDA Version: 12.4 | |-----------------------------------------+------------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+========================+======================| | 0 NVIDIA GeForce RTX 3090 Off | 00000000:0A:00.0 Off | N/A | | 30% 44C P2 109W / 350W | 17529MiB / 24576MiB | 0% Default | | | | N/A | +-----------------------------------------+------------------------+----------------------+ +-----------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=========================================================================================| +-----------------------------------------------------------------------------------------+ ``` ### Are you willing to submit a PR? - [ ] Yes I'd like to help by submitting a PR!
1medium
Title: [next_ui] Jobs limit query display filter Body: ### Please confirm the following - [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html). - [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates. - [X] I understand that AWX is open source software provided for free and that I might not receive a timely response. ### Feature type Enhancement to Existing Feature ### Feature Summary Hello, We mostly use AWX Job template provisoning callback. We should have way to filter and display the `limit` info in `Jobs` UI. I can see `limit` info inf job detail API `/api/v2/jobs/job_id` Please check image below for it. ![2024-09-12 11_48_44-AWX](https://github.com/user-attachments/assets/cc882a63-554e-483c-9392-c21400479f8b) Thanks ### Select the relevant components - [X] UI - [ ] API - [ ] Docs - [ ] Collection - [ ] CLI - [ ] Other ### Steps to reproduce in Jobs UI, there isn't way to display limit info ### Current results Can't filter `limit` info ### Sugested feature result we should have way to filter/query `Limit` ### Additional information _No response_
1medium
Title: AttributeError: 'tuple' object has no attribute 'expandtabs' Body: I'm getting the following error when running `python -m gpt_engineer.main`. I'm using python 3.11/ ``` File "/opt/miniconda3/envs/gpt-eng/lib/python3.11/inspect.py", line 873, in cleandoc lines = doc.expandtabs().split('\n') ^^^^^^^^^^^^^^ AttributeError: 'tuple' object has no attribute 'expandtabs' ```
1medium
Title: Help: size mismatch after replacing the pretrained model Body: After replacing the pretrained model with the pretrained-11-7-21_75k checkpoint shared by another user, a size mismatch occurs. Found 266 samples +----------------+------------+---------------+------------------+ | Steps with r=2 | Batch Size | Learning Rate | Outputs/Step (r) | +----------------+------------+---------------+------------------+ | 85k Steps | 12 | 5e-06 | 2 | +----------------+------------+---------------+------------------+ E:\MockingBird\synthesizer\synthesizer_dataset.py:84: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at C:\cb\pytorch_1000000000000\work\torch\csrc\utils\tensor_new.cpp:248.) embeds = torch.tensor(embeds) E:\MockingBird\synthesizer\synthesizer_dataset.py:84: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at C:\cb\pytorch_1000000000000\work\torch\csrc\utils\tensor_new.cpp:248.) embeds = torch.tensor(embeds) Traceback (most recent call last): File "synthesizer_train.py", line 37, in <module> train(**vars(args)) File "E:\MockingBird\synthesizer\train.py", line 208, in train optimizer.step() File "C:\Users\Administrator\anaconda3\envs\mock\lib\site-packages\torch\optim\optimizer.py", line 280, in wrapper out = func(*args, **kwargs) File "C:\Users\Administrator\anaconda3\envs\mock\lib\site-packages\torch\optim\optimizer.py", line 33, in _use_grad ret = func(self, *args, **kwargs) File "C:\Users\Administrator\anaconda3\envs\mock\lib\site-packages\torch\optim\adam.py", line 141, in step adam( File "C:\Users\Administrator\anaconda3\envs\mock\lib\site-packages\torch\optim\adam.py", line 281, in adam func(params, File "C:\Users\Administrator\anaconda3\envs\mock\lib\site-packages\torch\optim\adam.py", line 446, in _multi_tensor_adam torch._foreach_add_(device_exp_avgs, device_grads, alpha=1 - beta1) RuntimeError: The size of tensor a (1024) must match the size of tensor b (3) at non-singleton dimension 3 **Env & To Reproduce** Environment, code version, and model used: Python 3.8.16, pretrained **Screenshots** If applicable, add screenshots to help
1medium
Title: Disabling queue will display only the first character in chatbot streaming Body: ### Describe the bug I want to disable queue for bot() in chatbot streaming - https://www.gradio.app/docs/gradio/chatbot#demos - chatbot_streaming ### Have you searched existing issues? 🔎 - [X] I have searched and found no existing issues ### Reproduction ```python import gradio as gr import random import time with gr.Blocks() as demo: chatbot = gr.Chatbot(type="messages") msg = gr.Textbox() clear = gr.Button("Clear") def user(user_message, history: list): return "", history + [{"role": "user", "content": user_message}] def bot(history: list): bot_message = random.choice(["How are you?", "I love you", "I'm very hungry"]) history.append({"role": "assistant", "content": ""}) for character in bot_message: history[-1]['content'] += character time.sleep(0.05) yield history # OK # msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then( # bot, chatbot, chatbot # ) # NG msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then( bot, chatbot, chatbot, queue=False ) clear.click(lambda: None, None, chatbot, queue=False) if __name__ == "__main__": demo.launch(debug=True) ``` ### Screenshot OK | NG -- | -- ![image](https://github.com/user-attachments/assets/85360dc6-b094-400e-a9fe-7def7567b601) | ![image](https://github.com/user-attachments/assets/69062c4d-1f54-4170-9ff5-179b57fd7e2b) ### Logs ```shell * Running on local URL: http://127.0.0.1:7860 To create a public link, set `share=True` in `launch()`. ``` ### System Info ```shell Gradio Environment Information: ------------------------------ Operating System: Windows gradio version: 5.9.1 gradio_client version: 1.5.2 ------------------------------------------------ gradio dependencies in your environment: aiofiles: 23.2.1 anyio: 4.7.0 audioop-lts is not installed. fastapi: 0.115.6 ffmpy: 0.4.0 gradio-client==1.5.2 is not installed. httpx: 0.28.1 huggingface-hub: 0.27.0 jinja2: 3.1.4 markupsafe: 2.1.5 numpy: 2.2.0 orjson: 3.10.12 packaging: 24.2 pandas: 2.2.3 pillow: 11.0.0 pydantic: 2.10.3 pydub: 0.25.1 python-multipart: 0.0.20 pyyaml: 6.0.2 ruff: 0.8.3 safehttpx: 0.1.6 semantic-version: 2.10.0 starlette: 0.41.3 tomlkit: 0.13.2 typer: 0.15.1 typing-extensions: 4.12.2 urllib3: 2.2.3 uvicorn: 0.34.0 authlib; extra == 'oauth' is not installed. itsdangerous; extra == 'oauth' is not installed. gradio_client dependencies in your environment: fsspec: 2024.10.0 httpx: 0.28.1 huggingface-hub: 0.27.0 packaging: 24.2 typing-extensions: 4.12.2 websockets: 14.1 ``` ### Severity Blocking usage of gradio
1medium
Title: [Bug] Spleeter crashes at "KMP_AFFINITY" Body: ## Description On some songs Spleeter decides to exit too early, rendering nothing. ## Step to reproduce It's hard to get provide instructions due to potential piracy. I will not share the song I'm trying to process, but basically load a song and it exits at "Affinity" something. ## Output ``` INFO:spleeter:Audio data loaded successfully INFO:spleeter:Audio data loaded successfully OMP: Info #154: KMP_AFFINITY: Initial OS proc set respected: 0-11 OMP: Info #213: KMP_AFFINITY: decoding x2APIC ids. OMP: Info #276: KMP_AFFINITY: Affinity capable, using global cpuid leaf 11 info OMP: Info #156: KMP_AFFINITY: 12 available OS procs OMP: Info #157: KMP_AFFINITY: Uniform topology OMP: Info #191: KMP_AFFINITY: 1 socket x 6 cores/socket x 2 threads/core (6 total cores) OMP: Info #215: KMP_AFFINITY: OS proc to physical thread map: OMP: Info #171: KMP_AFFINITY: OS proc 0 maps to socket 0 core 0 thread 0 OMP: Info #171: KMP_AFFINITY: OS proc 1 maps to socket 0 core 0 thread 1 OMP: Info #171: KMP_AFFINITY: OS proc 2 maps to socket 0 core 1 thread 0 OMP: Info #171: KMP_AFFINITY: OS proc 3 maps to socket 0 core 1 thread 1 OMP: Info #171: KMP_AFFINITY: OS proc 4 maps to socket 0 core 2 thread 0 OMP: Info #171: KMP_AFFINITY: OS proc 5 maps to socket 0 core 2 thread 1 OMP: Info #171: KMP_AFFINITY: OS proc 6 maps to socket 0 core 3 thread 0 OMP: Info #171: KMP_AFFINITY: OS proc 7 maps to socket 0 core 3 thread 1 OMP: Info #171: KMP_AFFINITY: OS proc 8 maps to socket 0 core 4 thread 0 OMP: Info #171: KMP_AFFINITY: OS proc 9 maps to socket 0 core 4 thread 1 OMP: Info #171: KMP_AFFINITY: OS proc 10 maps to socket 0 core 5 thread 0 OMP: Info #171: KMP_AFFINITY: OS proc 11 maps to socket 0 core 5 thread 1 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 16216 thread 0 bound to OS proc set 0 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 23460 thread 1 bound to OS proc set 2 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 8744 thread 2 bound to OS proc set 4 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 19660 thread 3 bound to OS proc set 6 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 18976 thread 4 bound to OS proc set 8 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 26180 thread 5 bound to OS proc set 10 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 5280 thread 6 bound to OS proc set 1 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 26188 thread 7 bound to OS proc set 3 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 26176 thread 8 bound to OS proc set 5 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 10776 thread 9 bound to OS proc set 7 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 22556 thread 10 bound to OS proc set 9 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 1656 thread 11 bound to OS proc set 11 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 5664 thread 13 bound to OS proc set 2 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 26120 thread 14 bound to OS proc set 4 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 17960 thread 12 bound to OS proc set 0 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 17492 thread 15 bound to OS proc set 6 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 24596 thread 17 bound to OS proc set 10 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 9968 thread 16 bound to OS proc set 8 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 4584 thread 18 bound to OS proc set 1 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 14316 thread 19 bound to OS proc set 3 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 21612 thread 20 bound to OS proc set 5 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 19120 thread 21 
bound to OS proc set 7 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 25808 thread 22 bound to OS proc set 9 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 25704 thread 23 bound to OS proc set 11 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 18908 thread 24 bound to OS proc set 0 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 25836 thread 25 bound to OS proc set 2 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 24272 thread 26 bound to OS proc set 4 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 22248 thread 27 bound to OS proc set 6 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 24932 thread 28 bound to OS proc set 8 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 20220 thread 29 bound to OS proc set 10 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 19272 thread 30 bound to OS proc set 1 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 14016 thread 31 bound to OS proc set 3 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 10340 thread 32 bound to OS proc set 5 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 3204 thread 33 bound to OS proc set 7 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 3996 thread 34 bound to OS proc set 9 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 20376 thread 35 bound to OS proc set 11 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 7404 thread 36 bound to OS proc set 0 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 3736 thread 38 bound to OS proc set 4 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 21108 thread 37 bound to OS proc set 2 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 25812 thread 40 bound to OS proc set 8 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 15844 thread 39 bound to OS proc set 6 OMP: Info #251: KMP_AFFINITY: pid 12040 tid 6948 thread 41 bound to OS proc set 10 ``` ## Environment <!-- Fill the following table --> | | | | ----------------- | ------------------------------- | | OS | Windows 10 | | Installation type | Conda | | RAM available | 16GB | ## Additional context It's weird that it works on some songs but not on others.
2hard
Title: Crash in WSL Body: The script crashes when executed in WSL because it does not have access to the PC hardware. Please fix this or don't use `playsound`.
1medium
Title: QST: Does the project consider DataFrame.query() arbitrary code execution to be a security vulnerability? Body: ### Research - [X] I have searched the [[pandas] tag](https://stackoverflow.com/questions/tagged/pandas) on StackOverflow for similar questions. - [X] I have asked my usage related question on [StackOverflow](https://stackoverflow.com). ### Link to question on StackOverflow https://stackoverflow.com/questions/79304226/should-i-manually-patch-the-pandas-dataframe-query-vulnerability-or-wait-for-a (To clarify, this question was written by another user.) ### Question about pandas Hi, I saw [this question on StackOverflow](https://stackoverflow.com/questions/79304226/should-i-manually-patch-the-pandas-dataframe-query-vulnerability-or-wait-for-a), which is about a public CVE, [CVE-2024-9880](https://huntr.com/bounties/a49baae1-4652-4d6c-a179-313c21c41a8d). The basic premise of the CVE is that if an attacker controls the `expr` argument to DataFrame.query(), then arbitrary code execution can be achieved. The example given in the CVE is <details> ```python import pandas as pd df = pd.DataFrame({'a': [1, 2, 3], 'b': ['error_details', 'confidential_info', 'normal']}) query = '@pd.core.frame.com.builtins.__import__("os").system("""ping google.com #""")' try: engine = "python" result = df.query(query,local_dict={},engine="python",).index except Exception as e: print(f'Error: {e}') ``` </details> However, this is not minimal, and a more minimal construction would be ```python import pandas as pd df = pd.DataFrame() expr = '@pd.compat.os.system("""echo foo""")' result = df.query(expr, engine='python') ``` (The report also says that `engine='python'` is required, but both `engine='python'` and `engine='numexpr'` worked in my testing.) My question is about Pandas's security model. What security guarantees does Pandas make about DataFrame.query() with an attacker-controlled `expr`? My intuition about this is "none, don't do that," but I'm wondering what the Pandas project thinks.
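For what it's worth, a common application-side mitigation is to never let callers supply the expression text itself. The following is only a hedged, illustrative sketch (not an official pandas recommendation): accept a deliberately tiny expression grammar and pass user values exclusively through `local_dict`.
```python
# Illustrative mitigation sketch: allowlist the expression shape and bind the
# user-supplied value via `@val` through local_dict, never via string interpolation.
import re
import pandas as pd

SAFE_EXPR = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*\s*(==|!=|<|<=|>|>=)\s*@val$")

def safe_query(df: pd.DataFrame, expr: str, value):
    if not SAFE_EXPR.match(expr) or expr.split()[0] not in df.columns:
        raise ValueError("expression not allowed")
    return df.query(expr, local_dict={"val": value})

df = pd.DataFrame({"a": [1, 2, 3]})
print(safe_query(df, "a > @val", 1))                    # fine
# safe_query(df, '@pd.compat.os.system("""id""")', 0)   # rejected with ValueError
```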
2hard
Title: How can I use .txt annotation files instead of .xml? Body: I was studying darknet, but now I am trying darkflow. In darknet, my custom image dataset has annotations in .txt format, composed as: "object-class" "x" "y" "width" "height". So my question is: how can I use these .txt files? If I can't, how can I convert them to .xml files? Many thanks.
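darkflow reads Pascal VOC-style .xml annotations, so one option is converting each .txt file. Below is a hedged sketch; the class list, paths, and image sizes are placeholders for your dataset, and it assumes the darknet convention of center coordinates normalized to [0, 1].
```python
# Hedged sketch: convert one YOLO/darknet .txt annotation ("class x y w h")
# into a Pascal VOC .xml file that darkflow can consume.
from xml.etree.ElementTree import Element, SubElement, ElementTree

CLASSES = ["person", "car"]  # placeholder: index-aligned with your class ids

def yolo_txt_to_voc_xml(txt_path, xml_path, img_name, img_w, img_h):
    root = Element("annotation")
    SubElement(root, "filename").text = img_name
    size = SubElement(root, "size")
    SubElement(size, "width").text = str(img_w)
    SubElement(size, "height").text = str(img_h)
    SubElement(size, "depth").text = "3"
    with open(txt_path) as f:
        for line in f:
            cls_id, xc, yc, w, h = line.split()
            xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
            obj = SubElement(root, "object")
            SubElement(obj, "name").text = CLASSES[int(cls_id)]
            box = SubElement(obj, "bndbox")
            # Denormalize the center/size representation into corner pixels.
            SubElement(box, "xmin").text = str(int((xc - w / 2) * img_w))
            SubElement(box, "ymin").text = str(int((yc - h / 2) * img_h))
            SubElement(box, "xmax").text = str(int((xc + w / 2) * img_w))
            SubElement(box, "ymax").text = str(int((yc + h / 2) * img_h))
    ElementTree(root).write(xml_path)
```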
1medium
Title: Stop support for Python 3.8 Body: Stop supporting version 3.8 of Python.
2hard
Title: Error AttributeError: 'AnonymousUser' Body: I have this problem when I open my documentation: view's MyViewSet raised exception during schema generation; use `getattr(self, 'swagger_fake_view', False)` to detect and short-circuit this line 33, in get_queryset return MyModel.objects.filter(container=self.request.user.container) AttributeError: 'AnonymousUser' object has no attribute 'container' Does anybody know how to solve this, please?
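The error text itself points at the standard drf-yasg fix: during schema generation the view is invoked with an AnonymousUser, so `get_queryset` should short-circuit. A minimal sketch of that guard:
```python
# During schema generation drf-yasg sets swagger_fake_view on the view, so
# return an empty queryset instead of touching request.user.
def get_queryset(self):
    if getattr(self, 'swagger_fake_view', False):
        return MyModel.objects.none()  # empty queryset just for schema generation
    return MyModel.objects.filter(container=self.request.user.container)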
1medium
Title: Access to TFRC Body: Hello all, How can we access free TPUs via TFRC for pre-training a BERT language model on a specific language? The link below explains that we have to sign up here: https://services.google.com/fb/forms/tpusignup/ , but it seems to be a dead URL. https://ai.googleblog.com/2017/05/introducing-tensorflow-research-cloud.html Nothing happens when clicking "Apply now" at: https://www.tensorflow.org/tfrc
3misc
Title: What models or data sets are used for object detection Body: ### Search before asking - [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests. ### Question The following picture is an example, but no documentation was found ![image](https://github.com/roboflow/supervision/assets/34231503/a5c9d548-a641-43ca-b9e0-9aff26aee6b9) ### Additional _No response_
3misc
Title: pip install arcgis==2.0.1 fails on MacOS Body: **Describe the bug** Cannot install _arcgis_ Python SDK v2.0.1 on MacOS using _pip install .._ which makes server installs using _requirements.txt_ difficult (_arcgis_ Python SDK v2.0.0 installs just fine). **To Reproduce** Steps: ``` pip install arcgis==2.0.1 ``` Error: ``` pip install arcgis==2.0.1 ERROR: Ignored the following versions that require a different python version: 2.0.1 Requires-Python >=3.7, <3.10 ERROR: Could not find a version that satisfies the requirement arcgis==2.0.1 (from versions: 1.3.0, 1.3.0.post1, 1.3.0.post2, 1.4.0, 1.4.1, 1.4.2, 1.5.0, 1.5.1, 1.5.2, 1.5.2.post1, 1.5.3, 1.6.0, 1.6.1, 1.6.1.post1, 1.6.2, 1.6.2.post1, 1.7.0, 1.7.1, 1.8.0, 1.8.0.post1, 1.8.1, 1.8.2, 1.8.3, 1.8.3.post1, 1.8.3.post2, 1.8.4, 1.8.5.post1, 1.8.5.post2, 1.8.5.post3, 1.9.0, 1.9.1, 2.0.0) ERROR: No matching distribution found for arcgis==2.0.1 ``` **Expected behavior** A clear and concise description of what you expected to happen. **Platform (please complete the following information):** - OS: MacOS - Python API Version [e.g. `2.0.1`] **Additional context** `pip install arcgis=2.0.0` works just fine. Related: 1. https://github.com/Esri/arcgis-python-api/issues/1299
1medium
Title: Using BERT for document classification Body: How can I use BERT to fine-tune for document classification? Has anyone implemented it? Any example or lead would be really helpful. I want to use it for documents that are way bigger than the current max length (512 tokens).
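BERT itself is capped at 512 tokens, so a common workaround (outside this repo) is to split each document into overlapping windows, score every window, and pool the results. A hedged sketch with the Hugging Face `transformers` library; the model name and the mean-logit pooling choice are placeholders you may want to tune:
```python
# Hedged sketch (Hugging Face transformers, not this repo's code): classify a
# long document by averaging logits over overlapping 512-token windows.
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def classify_long_document(text, max_len=512, stride=128):
    # return_overflowing_tokens splits the text into overlapping windows.
    enc = tokenizer(text, return_overflowing_tokens=True, stride=stride,
                    max_length=max_len, truncation=True, padding="max_length",
                    return_tensors="pt")
    with torch.no_grad():
        logits = model(input_ids=enc["input_ids"],
                       attention_mask=enc["attention_mask"]).logits
    return logits.mean(dim=0).argmax().item()  # pool window logits, pick a class
```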
1medium
Title: run_pretraining.py ignores undefined flags. Body: Hi, I'm trying to train a BERT model, and I found that **`run_pretraining.py` ignores undefined flags**. For example, if I run:
```
python3 bert/run_pretraining.py \
--input_file=gs://input_dir/*.tfrecord \
--output_dir=gs://output_dir/ \
--do_train=True \
--do_eval=True \
--bert_config_file=./bert_config.json \
--train_batch_size=1024 \
--max_seq_length=128 \
--max_predictions_per_seq=20 \
--num_train_steps=1000000 \
--num_warmup_steps=10000 \
--learning_rate=5e-5 \
--save_checkpoints_steps=10000 \
--init_checkpoints=340000 \
--use_tpu=True \
--tpu_name=tpu2 \
--tpu_zone=us-central1-f \
--fake=undifined_flags --gcp_project=my_project \
--num_tpu_cores=8
```
**I included an undefined arg** **`--fake=undifined_flags`**. I think it should throw an error, but it doesn't; it trains fine. There seems to be no problem during the run: the TPU connected normally, checkpoints were produced normally, and the fine-tuning results from those checkpoints are not bad. Why doesn't it throw an error? Why does it work?
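A likely explanation (stated as an assumption, not verified against every TF release) is that TF 1.x-era flag parsing tolerates unrecognized options in the style of argparse's `parse_known_args()`, instead of failing like `parse_args()`. The difference is easy to demonstrate in plain Python:
```python
# Sketch of lenient vs. strict flag parsing with the standard library.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--train_batch_size", type=int)

# Lenient mode: unknown options are collected, not rejected.
args, unknown = parser.parse_known_args(
    ["--train_batch_size", "1024", "--fake=undifined_flags"])
print(unknown)  # ['--fake=undifined_flags'] -- silently tolerated

# Strict mode would exit with "error: unrecognized arguments":
# parser.parse_args(["--fake=undifined_flags"])
```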
1medium
Title: How to control /socket.io/ endpoint Body: Hello, This is less of an issue and more of a question, but is it possible to control the `/socket.io/` endpoint? I am using `Flask-SocketIO` for authenticated users only, using the `authenticated_only` function wrapper:
```python
def authenticated_only(f):
    @functools.wraps(f)
    def wrapped(*args, **kwargs):
        if not current_user.is_authenticated:
            disconnect()
        else:
            return f(*args, **kwargs)
    return wrapped
```
All `@socketio.on` handlers are making use of this wrapper, including `@socketio.on('connect')`. If an unauthenticated user makes a direct `GET` request to my server at the `/socket.io/` endpoint, for example `/socket.io/?EIO=3&transport=polling&t=MqK2tJx`, this request will time out and eventually result in a 502 error (`socket.io` is behind `nginx`), but it seems to still be "stuck" in the async queue. Is it possible to control the `/socket.io/` endpoint and immediately return a `403` for unauthenticated users? Otherwise I eventually end up with errors as a core is "stuck" trying to manage this request: `[DANGER] async queue is full !!!`
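One Flask-SocketIO mechanism worth verifying against your installed version: a `connect` handler can reject the handshake outright, either by returning `False` or by raising `ConnectionRefusedError`, so the client is refused immediately instead of lingering in the queue. A minimal sketch:
```python
# Hedged sketch: reject unauthenticated connections at handshake time.
# `socketio` is assumed to be your existing SocketIO instance.
from flask_socketio import ConnectionRefusedError  # Flask-SocketIO >= 3.x
from flask_login import current_user

@socketio.on('connect')
def on_connect():
    if not current_user.is_authenticated:
        # Returning False also rejects; raising lets you attach a payload.
        raise ConnectionRefusedError('unauthorized')
```
Note this rejects the Socket.IO handshake rather than returning a literal HTTP 403; blocking `/socket.io/` for unauthenticated sessions at the nginx layer is the other common option.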
1medium
Title: [BUG]: <Provide a clear, descriptive title> Cannot generate resume: Chromedriver version not discovered Body: ### Describe the bug 2024-09-30 11:27:11.553 | ERROR | src.aihawk_easy_applier:_create_and_upload_resume:470 - Failed to generate resume: Message: Selenium Manager failed for: /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/selenium/webdriver/common/macos/selenium-manager --browser chrome --output json --debug. The chromedriver version cannot be discovered 2024-09-30 11:27:11.556 | ERROR | src.aihawk_easy_applier:_create_and_upload_resume:472 - Traceback: Traceback (most recent call last): File "/Users/maauri/Projects/Auto_jobs/linkedIn_auto_jobs_applier_with_AI/src/aihawk_easy_applier.py", line 442, in _create_and_upload_resume resume_pdf_base64 = self.resume_generator_manager.pdf_base64(job_description_text=job.description) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/lib_resume_builder_AIHawk/manager_facade.py", line 81, in pdf_base64 pdf_base64 = HTML_to_PDF(temp_html_path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/lib_resume_builder_AIHawk/utils.py", line 25, in HTML_to_PDF driver = create_driver_selenium() ^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/lib_resume_builder_AIHawk/utils.py", line 18, in create_driver_selenium return webdriver.Chrome(service=service, options=options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/selenium/webdriver/chrome/webdriver.py", line 82, in __init__ service.path = DriverFinder.get_path(service, options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/selenium/webdriver/common/driver_finder.py", line 43, in get_path raise err File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/selenium/webdriver/common/driver_finder.py", line 40, in get_path path = shutil.which(service.path) or SeleniumManager().driver_location(options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/selenium/webdriver/common/selenium_manager.py", line 91, in driver_location result = self.run(args) ^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/selenium/webdriver/common/selenium_manager.py", line 112, in run raise SeleniumManagerException(f"Selenium Manager failed for: {command}.\n{result}{stderr}") selenium.common.exceptions.SeleniumManagerException: Message: Selenium Manager failed for: /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/selenium/webdriver/common/macos/selenium-manager --browser chrome --output json --debug. The chromedriver version cannot be discovered 2024-09-30 11:27:11.556 | ERROR | src.aihawk_easy_applier:fill_up:337 - Failed to find form elements: Message: Selenium Manager failed for: /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/selenium/webdriver/common/macos/selenium-manager --browser chrome --output json --debug. 
The chromedriver version cannot be discovered ### Steps to reproduce Run the program after cloning using python3 main.py ### Expected behavior The resume should have been generated ### Actual behavior Selects an already uploaded resume and proceeds ### Branch main ### Branch name _No response_ ### Python version 3.12.2 ### LLM Used OpenAI ### Model used GPT-4o-mini ### Additional context _No response_
1medium
Title: pandas 2.2: changed behaviour in `test_multi.py::test_concat5` Body: This test has started failing when pandas got upgraded to 2.2 last weekend: ``` FAILED dask/dataframe/tests/test_multi.py::test_concat5 - AssertionError: DataFrame are different DataFrame shape mismatch [left]: (14, 6) [right]: (14, 5) ```
1medium
Title: Issues about scikit-learn Body: 1 - In requirements.txt you have "scikit_learn"; I think it should be "scikit-learn", i.e., the underscore "_" should be changed to a hyphen "-". 2 - pyod is a dependency of pycaret. I'm trying to add support for scikit-learn 1.4, but pyod has a conflict. Please add support for scikit-learn 1.4 and release a new version. Thanks.
1medium
Title: [BUG]move_rank is much slower than pd.rolling(window).rank() when window is a large value(>1000) Body: move_rank is much slower than pd.rolling(window).rank() when window is a large value(>1000)
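For anyone reproducing this, a small benchmark sketch (array size, window, and repeat count are arbitrary; absolute numbers will differ by machine, and rolling rank needs pandas >= 1.4):
```python
# Quick timing sketch comparing bottleneck.move_rank with pandas' rolling rank
# at a large window, where the reported slowdown is said to appear.
import timeit
import numpy as np
import bottleneck as bn
import pandas as pd

a = np.random.rand(100_000)
s = pd.Series(a)
window = 2000

t_bn = timeit.timeit(lambda: bn.move_rank(a, window), number=3)
t_pd = timeit.timeit(lambda: s.rolling(window).rank(), number=3)
print(f"bn.move_rank: {t_bn:.2f}s  pandas rolling rank: {t_pd:.2f}s")
```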
2hard
Title: [Bug-App]: Invalid License json: cannot unmarshal string into Go struct field WeaveLimits.weaveLimits.weaveLimitBytes of type int64 Body: ### Describe the bug I installed wandb locally and got a free license. ![Image](https://github.com/user-attachments/assets/6a6ca74d-48a8-4db2-9e69-cb5fbac61424) But when I click on "update settings", I get the error ``` Invalid License json: cannot unmarshal string into Go struct field WeaveLimits.weaveLimits.weaveLimitBytes of type int64 ``` ![Image](https://github.com/user-attachments/assets/6c6a7bf0-492e-453a-b844-c05b6b1bff9d) When I look at the payload, the field is of type string: ![Image](https://github.com/user-attachments/assets/da2f2246-2454-49d3-8740-bd64a2d765fe) Can you fix that in your license?
1medium
Title: question: how do I select the optimum number of resamples value to use? Body: Hi, It may be possible that the results of Sobol, Morris, Delta-Moment, and Derivative-based Global Sensitivity Measure depend on the number of resamples used. If this is the case, is there a way to estimate the optimum value? For example, I have this problem definition: problem= { 'num_vars': 6, 'names': ['Amplitude', 'Bandwidth', 'Envelope', 'Instantaneous Frequency', 'Sweetness', 'Thin Bed'], 'bounds': [[min_amplitude, max_amplitude], [min_bandwidth, max_bandwidth], [min_envelope, max_envelope], [min_instantaneous_frequency, max_instantaneous_frequency], [min_sweetness, max_sweetness], [min_thin_bed, max_thin_bed]], 'distributions': ['norm', 'norm', 'norm', 'norm', 'norm', 'norm'] } The input and model data are in a 2D array of size 7344 x 7. The last column contains the model output. What would be the optimum value for the number of resamples? Is there a formula that I can use? Many thanks, Ivan
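To my understanding, in SALib `num_resamples` only drives the bootstrap behind the reported confidence intervals (e.g., `S1_conf`, `delta_conf`), not the index estimates themselves, so there is no closed-form optimum. A practical check is to increase it until the intervals stabilize. A hedged sketch using the Delta analyzer, which works on given-data X/Y like yours; `problem` is the dict above, `X` the 7344 x 6 input block, and `Y` the output column:
```python
# Hedged convergence check: rerun the analysis with growing num_resamples and
# stop once the bootstrap confidence intervals stop changing noticeably.
from SALib.analyze import delta

for n in (100, 500, 1000, 2000):
    res = delta.analyze(problem, X, Y, num_resamples=n, print_to_console=False)
    print(n, res["delta_conf"])  # stabilized values suggest n is large enough
```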
1medium
Title: google-api-core error with lp.TesseractAgent(languages='eng') Body: **Describe the bug** code: `ocr_agent = lp.TesseractAgent(languages='eng')` Error: ContextualVersionConflict: (google-api-core 2.11.0 (/usr/local/lib/python3.10/dist-packages), Requirement.parse('google-api-core[grpc]<2.0.0dev,>=1.14.0'), {'google-cloud-vision'}) **Checklist** 1. I have searched related issues but cannot get the expected help. 2. The bug has not been fixed in the latest version, see the [Layout Parser Releases](https://github.com/Layout-Parser/layout-parser/releases/) **To Reproduce** Steps to reproduce the behavior: 1. What command or script did you run? ``` ocr_agent = lp.TesseractAgent(languages='eng') ``` **Environment** 1. Platform: Google Colab
1medium
Title: pymysql Connection.ping AttributeError Body: ### Describe the bug sqlalchemy with PyMySQL 0.9.3 ping cause exception. ### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected _No response_ ### SQLAlchemy Version in Use 2.0.23 1.4.50 ### DBAPI (i.e. the database driver) pymysql ### Database Vendor and Major Version MySQL 8 ### Python Version 3.9 ### Operating system Linux ### To Reproduce ```python from sqlalchemy import create_engine from sqlalchemy import text SQLALCHEMY_DATABASE_URI='mysql+pymysql://xxxx:xxxx@xxxx:3306/mysql' engine = create_engine(SQLALCHEMY_DATABASE_URI, pool_pre_ping=True, pool_size=1) def test(): with engine.connect() as conn: conn.execute(text("select user from mysql.user limit 1")) for _ in range(2): test() ``` ### Error ``` Traceback (most recent call last): File "/workspace/test-env/../greatrds/.vscode/test.py", line 13, in <module> test() File "/workspace/test-env/../greatrds/.vscode/test.py", line 9, in test with engine.connect() as conn: File "/workspace/test-env/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 3268, in connect return self._connection_cls(self) File "/workspace/test-env/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 145, in __init__ self._dbapi_connection = engine.raw_connection() File "/workspace/test-env/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 3292, in raw_connection return self.pool.connect() File "/workspace/test-env/venv/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 452, in connect return _ConnectionFairy._checkout(self) File "/workspace/test-env/venv/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 1378, in _checkout del fairy File "/workspace/test-env/venv/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__ raise exc_value.with_traceback(exc_tb) File "/workspace/test-env/venv/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 1306, in _checkout result = pool._dialect._do_ping_w_event( File "/workspace/test-env/venv/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 709, in _do_ping_w_event return self.do_ping(dbapi_connection) File "/workspace/test-env/venv/lib/python3.9/site-packages/sqlalchemy/dialects/mysql/pymysql.py", line 104, in do_ping if self._send_false_to_ping: File "/workspace/test-env/venv/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 1146, in __get__ obj.__dict__[self.__name__] = result = self.fget(obj) File "/workspace/test-env/venv/lib/python3.9/site-packages/sqlalchemy/dialects/mysql/pymysql.py", line 93, in _send_false_to_ping insp = langhelpers.get_callable_argspec(Connection.ping) AttributeError: 'function' object has no attribute 'ping' ``` ### Additional context Package Version ---------- ------- greenlet 3.0.1 pip 21.2.4 PyMySQL 0.9.3 setuptools 58.1.0 SQLAlchemy 1.4.50
1medium
Title: WAN2.1 apply_group_offloading **ERROR** result Body: ### Describe the bug I am attempting to use the WAN 2.1 model from the diffusers library to complete an image-to-video task on an NVIDIA RTX 4090. To optimize memory usage, I chose the group offload method and intended to compare resource consumption across different configurations. However, during testing, I encountered two main issues: 1. When using the group_offload_leaf_stream method: I received warnings that some layers were not executed during the forward pass: ``` It seems like some layers were not executed during the forward pass. This may lead to problems when applying lazy prefetching with automatic tracing and lead to device-mismatch related errors. Please make sure that all layers are executed during the forward pass. The following layers were not executed: unexecuted_layers=['blocks.25.attn2.norm_added_q', 'blocks.10.attn2.norm_added_q', 'blocks.13.attn2.norm_added_q', 'blocks.11.attn2.norm_added_q', 'blocks.34.attn2.norm_added_q', 'blocks.0.attn2.norm_added_q', 'blocks.35.attn2.norm_added_q', 'blocks.33.attn2.norm_added_q', 'blocks.21.attn2.norm_added_q', 'blocks.20.attn2.norm_added_q', 'blocks.3.attn2.norm_added_q', 'blocks.7.attn2.norm_added_q', 'blocks.22.attn2.norm_added_q', 'blocks.14.attn2.norm_added_q', 'blocks.29.attn2.norm_added_q', 'blocks.9.attn2.norm_added_q', 'blocks.1.attn2.norm_added_q', 'blocks.37.attn2.norm_added_q', 'blocks.18.attn2.norm_added_q', 'blocks.30.attn2.norm_added_q', 'blocks.4.attn2.norm_added_q', 'blocks.32.attn2.norm_added_q', 'blocks.36.attn2.norm_added_q', 'blocks.26.attn2.norm_added_q', 'blocks.6.attn2.norm_added_q', 'blocks.38.attn2.norm_added_q', 'blocks.17.attn2.norm_added_q', 'blocks.12.attn2.norm_added_q', 'blocks.19.attn2.norm_added_q', 'blocks.16.attn2.norm_added_q', 'blocks.15.attn2.norm_added_q', 'blocks.28.attn2.norm_added_q', 'blocks.24.attn2.norm_added_q', 'blocks.31.attn2.norm_added_q', 'blocks.8.attn2.norm_added_q', 'blocks.5.attn2.norm_added_q', 'blocks.27.attn2.norm_added_q', 'blocks.2.attn2.norm_added_q', 'blocks.39.attn2.norm_added_q', 'blocks.23.attn2.norm_added_q'] ``` ![Image](https://github.com/user-attachments/assets/dcfb2837-0fa1-47b5-a220-576662074938) This issue resulted in severe degradation of the generated output. This is the image I selected: ![Image](https://github.com/user-attachments/assets/30232325-69bb-439b-b415-99e9f65f8123) I got an incorrect video: https://github.com/user-attachments/assets/7a8b55a2-6a71-493a-b7ae-64566b321954 When I use the default pipe, i.e., without group_offload_leaf_stream, I get the correct result: https://github.com/user-attachments/assets/9b54c2f2-fa93-422f-b3df-619ee96bb3c8 2. When using the group_offload_block_1_stream method: I encountered a runtime error: "RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same". It appears that the VAE module was not correctly assigned to the GPU device.
``` Traceback (most recent call last): File "/maindata/data/shared/public/haobang.geng/code/video-generate/i2v-baseline/wanx-all-profile.py", line 171, in <module> main(args) File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) File "/maindata/data/shared/public/haobang.geng/code/video-generate/i2v-baseline/wanx-all-profile.py", line 143, in main run_inference() File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/memory_profiler.py", line 1188, in wrapper val = prof(func)(*args, **kwargs) File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/memory_profiler.py", line 761, in f return func(*args, **kwds) File "/maindata/data/shared/public/haobang.geng/code/video-generate/i2v-baseline/wanx-all-profile.py", line 130, in run_inference output = pipe( File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/diffusers/pipelines/wan/pipeline_wan_i2v.py", line 587, in __call__ latents, condition = self.prepare_latents( File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/diffusers/pipelines/wan/pipeline_wan_i2v.py", line 392, in prepare_latents latent_condition = retrieve_latents(self.vae.encode(video_condition), generator) File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/diffusers/utils/accelerate_utils.py", line 46, in wrapper return method(self, *args, **kwargs) File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/diffusers/models/autoencoders/autoencoder_kl_wan.py", line 795, in encode h = self._encode(x) File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/diffusers/models/autoencoders/autoencoder_kl_wan.py", line 762, in _encode out = self.encoder(x[:, :, :1, :, :], feat_cache=self._enc_feat_map, feat_idx=self._enc_conv_idx) File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/diffusers/models/autoencoders/autoencoder_kl_wan.py", line 439, in forward x = self.conv_in(x, feat_cache[idx]) File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/diffusers/models/autoencoders/autoencoder_kl_wan.py", line 78, in forward return super().forward(x) File 
"/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 725, in forward return self._conv_forward(input, self.weight, self.bias) File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 720, in _conv_forward return F.conv3d( RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same ``` Request for Help: Are there recommended approaches to ensure all layers are properly executed, especially for the group_offload_leaf_stream method? How can I resolve the device mismatch issue related to the VAE? Any suggestions or guidance would be greatly appreciated! ### Reproduction here is my code ```python import argparse import functools import json import os import pathlib import psutil import time import torch from diffusers import FluxPipeline from diffusers.hooks import apply_group_offloading from memory_profiler import profile import torch import numpy as np from diffusers import AutoencoderKLWan, WanImageToVideoPipeline from diffusers.utils import export_to_video, load_image from transformers import CLIPVisionModel from diffusers import FlowMatchEulerDiscreteScheduler, UniPCMultistepScheduler, WanPipeline def get_memory_usage(): process = psutil.Process(os.getpid()) mem_bytes = process.memory_info().rss return mem_bytes @profile(precision=2) def apply_offload(pipe: FluxPipeline, method: str) -> None: if method == "full_cuda": pipe.to("cuda") elif method == "model_offload": pipe.enable_model_cpu_offload() elif method == "sequential_offload": pipe.enable_sequential_cpu_offload() elif method == "group_offload_block_1": offloader_fn = functools.partial( apply_group_offloading, onload_device=torch.device("cuda"), offload_device=torch.device("cpu"), offload_type="block_level", num_blocks_per_group=1, use_stream=False, ) list(map(offloader_fn, [pipe.text_encoder, pipe.transformer, pipe.vae, pipe.image_encoder])) elif method == "group_offload_leaf": offloader_fn = functools.partial( apply_group_offloading, onload_device=torch.device("cuda"), offload_device=torch.device("cpu"), offload_type="leaf_level", use_stream=False, ) list(map(offloader_fn, [pipe.text_encoder, pipe.transformer, pipe.vae, pipe.image_encoder])) elif method == "group_offload_block_1_stream": offloader_fn = functools.partial( apply_group_offloading, onload_device=torch.device("cuda"), offload_device=torch.device("cpu"), offload_type="block_level", num_blocks_per_group=1, use_stream=True, ) list(map(offloader_fn, [pipe.text_encoder, pipe.transformer, pipe.vae, pipe.image_encoder])) elif method == "group_offload_leaf_stream": offloader_fn = functools.partial( apply_group_offloading, onload_device=torch.device("cuda"), offload_device=torch.device("cpu"), offload_type="leaf_level", use_stream=True, ) list(map(offloader_fn, [pipe.text_encoder, pipe.transformer, pipe.vae, pipe.image_encoder])) @profile(precision=2) def load_pipeline(): model_id = "Wan2.1-I2V-14B-480P-Diffusers" image_encoder = CLIPVisionModel.from_pretrained( model_id, subfolder="image_encoder", torch_dtype=torch.float32 ) vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32) scheduler_b = UniPCMultistepScheduler(prediction_type="flow_prediction", use_flow_sigmas=True, flow_shift=3.0) pipe = WanImageToVideoPipeline.from_pretrained( model_id, vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16, scheduler=scheduler_b ) return pipe 
### Reproduction

Here is my code:

```python
import argparse
import functools
import json
import os
import pathlib
import psutil
import time

import torch
from diffusers import FluxPipeline
from diffusers.hooks import apply_group_offloading
from memory_profiler import profile
import numpy as np
from diffusers import AutoencoderKLWan, WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image
from transformers import CLIPVisionModel
from diffusers import FlowMatchEulerDiscreteScheduler, UniPCMultistepScheduler, WanPipeline


def get_memory_usage():
    process = psutil.Process(os.getpid())
    mem_bytes = process.memory_info().rss
    return mem_bytes


@profile(precision=2)
def apply_offload(pipe: FluxPipeline, method: str) -> None:
    if method == "full_cuda":
        pipe.to("cuda")
    elif method == "model_offload":
        pipe.enable_model_cpu_offload()
    elif method == "sequential_offload":
        pipe.enable_sequential_cpu_offload()
    elif method == "group_offload_block_1":
        offloader_fn = functools.partial(
            apply_group_offloading,
            onload_device=torch.device("cuda"),
            offload_device=torch.device("cpu"),
            offload_type="block_level",
            num_blocks_per_group=1,
            use_stream=False,
        )
        list(map(offloader_fn, [pipe.text_encoder, pipe.transformer, pipe.vae, pipe.image_encoder]))
    elif method == "group_offload_leaf":
        offloader_fn = functools.partial(
            apply_group_offloading,
            onload_device=torch.device("cuda"),
            offload_device=torch.device("cpu"),
            offload_type="leaf_level",
            use_stream=False,
        )
        list(map(offloader_fn, [pipe.text_encoder, pipe.transformer, pipe.vae, pipe.image_encoder]))
    elif method == "group_offload_block_1_stream":
        offloader_fn = functools.partial(
            apply_group_offloading,
            onload_device=torch.device("cuda"),
            offload_device=torch.device("cpu"),
            offload_type="block_level",
            num_blocks_per_group=1,
            use_stream=True,
        )
        list(map(offloader_fn, [pipe.text_encoder, pipe.transformer, pipe.vae, pipe.image_encoder]))
    elif method == "group_offload_leaf_stream":
        offloader_fn = functools.partial(
            apply_group_offloading,
            onload_device=torch.device("cuda"),
            offload_device=torch.device("cpu"),
            offload_type="leaf_level",
            use_stream=True,
        )
        list(map(offloader_fn, [pipe.text_encoder, pipe.transformer, pipe.vae, pipe.image_encoder]))


@profile(precision=2)
def load_pipeline():
    model_id = "Wan2.1-I2V-14B-480P-Diffusers"
    image_encoder = CLIPVisionModel.from_pretrained(
        model_id, subfolder="image_encoder", torch_dtype=torch.float32
    )
    vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
    scheduler_b = UniPCMultistepScheduler(prediction_type="flow_prediction", use_flow_sigmas=True, flow_shift=3.0)
    pipe = WanImageToVideoPipeline.from_pretrained(
        model_id, vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16, scheduler=scheduler_b
    )
    return pipe


@torch.no_grad()
def main(args):
    os.makedirs(args.output_dir, exist_ok=True)
    os.makedirs(f"./results/check-wanmulti-framework/{args.method}/", exist_ok=True)
    pipe = load_pipeline()
    apply_offload(pipe, args.method)
    apply_offload_memory_usage = get_memory_usage()
    torch.cuda.reset_peak_memory_stats()
    cuda_model_memory = torch.cuda.max_memory_reserved()
    output_dir = pathlib.Path(args.output_dir)
    output_dir.mkdir(exist_ok=True, parents=True)
    run_inference_memory_usage_list = []

    def cpu_mem_callback():
        nonlocal run_inference_memory_usage_list
        run_inference_memory_usage_list.append(get_memory_usage())

    @profile(precision=2)
    def run_inference():
        image = load_image("./dataset/character-img/imgs3/1.jpeg")
        max_area = 480 * 832
        aspect_ratio = image.height / image.width
        mod_value = pipe.vae_scale_factor_spatial * pipe.transformer.config.patch_size[1]
        height = round(np.sqrt(max_area * aspect_ratio)) // mod_value * mod_value
        width = round(np.sqrt(max_area / aspect_ratio)) // mod_value * mod_value
        prompt = (
            "A person smile."
        )
        negative_prompt = "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards"
        generator = torch.Generator("cuda").manual_seed(100)
        output = pipe(
            image=image,
            prompt=prompt,
            negative_prompt=negative_prompt,
            height=height,
            width=width,
            num_frames=81,
            guidance_scale=5.0,
            generator=generator,
        ).frames[0]
        export_to_video(output, f"./results/check-wanmulti-framework/{args.method}/wanx_diffusers.mp4", fps=16)

    t1 = time.time()
    run_inference()
    torch.cuda.synchronize()
    t2 = time.time()
    cuda_inference_memory = torch.cuda.max_memory_reserved()
    time_required = t2 - t1
    # run_inference_memory_usage = sum(run_inference_memory_usage_list) / len(run_inference_memory_usage_list)
    # print(f"Run inference memory usage list: {run_inference_memory_usage_list}")
    info = {
        "time": round(time_required, 2),
        "cuda_model_memory": round(cuda_model_memory / 1024**3, 2),
        "cuda_inference_memory": round(cuda_inference_memory / 1024**3, 2),
        "cpu_offload_memory": round(apply_offload_memory_usage / 1024**3, 2),
    }
    with open(output_dir / f"memory_usage_{args.method}.json", "w") as f:
        json.dump(info, f, indent=4)


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--method",
        type=str,
        default="full_cuda",
        choices=[
            "full_cuda",
            "model_offload",
            "sequential_offload",
            "group_offload_block_1",
            "group_offload_leaf",
            "group_offload_block_1_stream",
            "group_offload_leaf_stream",
        ],
    )
    parser.add_argument("--output_dir", type=str, default="./results/offload_profiling")
    return parser.parse_args()


if __name__ == "__main__":
    args = get_args()
    main(args)
```

Here is my environment:

```
Package                            Version
---------------------------------  --------------------
absl-py                            2.1.0
accelerate                         1.4.0
addict                             2.4.0
aiofiles                           23.2.1
aiohappyeyeballs                   2.4.3
aiohttp                            3.10.10
aiosignal                          1.3.1
airportsdata                       20241001
albucore                           0.0.17
albumentations                     1.4.18
aliyun-python-sdk-core             2.16.0
aliyun-python-sdk-kms              2.16.5
altair                             5.4.1
annotated-types                    0.7.0
antlr4-python3-runtime             4.9.3
anyio                              4.6.2.post1
astor                              0.8.1
asttokens                          2.4.1
astunparse                         1.6.3
async-timeout                      4.0.3
attrs                              24.2.0
av                                 13.1.0
beautifulsoup4                     4.12.3
blake3                             1.0.4
blinker                            1.9.0
boto3                              1.35.60
botocore                           1.35.60
braceexpand                        0.1.7
certifi                            2024.8.30
cffi                               1.17.1
charset-normalizer                 3.4.0
click                              8.1.7
clip                               0.2.0
cloudpickle                        3.1.0
coloredlogs                        15.0.1
comm                               0.2.2
compressed-tensors                 0.8.0
ConfigArgParse                     1.7
contourpy                          1.3.0
controlnet_aux                     0.0.7
cpm-kernels                        1.0.11
crcmod                             1.7
cryptography                       44.0.1
cupy-cuda12x                       13.3.0
cycler                             0.12.1
Cython                             3.0.12
dash                               2.18.2
dash-core-components               2.0.0
dash-html-components               2.0.0
dash-table                         5.0.0
dashscope                          1.22.2
datasets                           3.0.1
debugpy                            1.8.10
decorator                          4.4.2
decord                             0.6.0
deepspeed                          0.15.2
depyf                              0.18.0
diffsynth                          1.1.2
diffusers                          0.33.0.dev0
dill                               0.3.8
diskcache                          5.6.3
distro                             1.9.0
dnspython                          2.7.0
docker-pycreds                     0.4.0
easydict                           1.13
einops                             0.8.0
email_validator                    2.2.0
eval_type_backport                 0.2.0
exceptiongroup                     1.2.2
executing                          2.1.0
facexlib                           0.3.0
fairscale                          0.4.13
fastapi                            0.115.2
fastjsonschema                     2.20.0
fastrlock                          0.8.3
ffmpy                              0.4.0
filelock                           3.16.1
filterpy                           1.4.5
flash-attn                         2.6.3
Flask                              3.0.3
flatbuffers                        24.3.25
fonttools                          4.54.1
frozenlist                         1.4.1
fsspec                             2024.6.1
ftfy                               6.3.0
func_timeout                       4.3.5
future                             1.0.0
fvcore                             0.1.5.post20221221
gast                               0.6.0
gguf                               0.10.0
gitdb                              4.0.11
GitPython                          3.1.43
google-pasta                       0.2.0
gradio                             5.5.0
gradio_client                      1.4.2
grpcio                             1.66.2
h11                                0.14.0
h5py                               3.12.1
hjson                              3.1.0
httpcore                           1.0.6
httptools                          0.6.4
httpx                              0.27.2
huggingface-hub                    0.29.1
humanfriendly                      10.0
idna                               3.10
imageio                            2.36.0
imageio-ffmpeg                     0.5.1
imgaug                             0.4.0
importlib_metadata                 8.5.0
iniconfig                          2.0.0
interegular                        0.3.3
iopath                             0.1.10
ipykernel                          6.29.5
ipython                            8.29.0
ipywidgets                         8.1.5
itsdangerous                       2.2.0
jaxtyping                          0.2.34
jedi                               0.19.1
Jinja2                             3.1.4
jiter                              0.7.0
jmespath                           0.10.0
joblib                             1.4.2
jsonschema                         4.23.0
jsonschema-specifications          2024.10.1
jupyter_client                     8.6.3
jupyter_core                       5.7.2
jupyterlab_widgets                 3.0.13
keras                              3.7.0
kiwisolver                         1.4.7
lark                               1.2.2
lazy_loader                        0.4
libclang                           18.1.1
libigl                             2.5.1
linkify-it-py                      2.0.3
llvmlite                           0.43.0
lm-format-enforcer                 0.10.9
lmdb                               1.6.2
loguru                             0.7.3
lvis                               0.5.3
Markdown                           3.7
markdown-it-py                     2.2.0
MarkupSafe                         2.1.5
matplotlib                         3.9.2
matplotlib-inline                  0.1.7
mdit-py-plugins                    0.3.3
mdurl                              0.1.2
memory-profiler                    0.61.0
mistral_common                     1.5.1
ml-dtypes                          0.4.1
modelscope                         1.23.2
moviepy                            1.0.3
mpmath                             1.3.0
msgpack                            1.1.0
msgspec                            0.18.6
multidict                          6.1.0
multiprocess                       0.70.16
namex                              0.0.8
narwhals                           1.10.0
natsort                            8.4.0
nbformat                           5.10.4
nest-asyncio                       1.6.0
networkx                           3.4.1
ninja                              1.11.1.3
numba                              0.60.0
numpy                              1.26.4
nvdiffrast                         0.3.3
nvidia-cublas-cu12                 12.4.5.8
nvidia-cuda-cupti-cu12             12.4.127
nvidia-cuda-nvrtc-cu12             12.4.127
nvidia-cuda-runtime-cu12           12.4.127
nvidia-cudnn-cu12                  9.1.0.70
nvidia-cufft-cu12                  11.2.1.3
nvidia-curand-cu12                 10.3.5.147
nvidia-cusolver-cu12               11.6.1.9
nvidia-cusparse-cu12               12.3.1.170
nvidia-cusparselt-cu12             0.6.2
nvidia-ml-py                       12.560.30
nvidia-nccl-cu12                   2.21.5
nvidia-nvjitlink-cu12              12.4.127
nvidia-nvtx-cu12                   12.4.127
omegaconf                          2.3.0
onnxruntime                        1.20.0
open3d                             0.18.0
openai                             1.54.4
openai-clip                        1.0.1
opencv-python                      4.10.0.84
opencv-python-headless             4.10.0.84
opt_einsum                         3.4.0
optree                             0.13.1
orjson                             3.10.7
oss2                               2.19.1
outlines                           0.0.46
packaging                          24.1
pandas                             2.2.3
parso                              0.8.4
partial-json-parser                0.2.1.1.post4
peft                               0.13.2
pexpect                            4.9.0
pillow                             10.4.0
pip                                24.2
platformdirs                       4.3.6
plotly                             5.24.1
pluggy                             1.5.0
pooch                              1.8.2
portalocker                        2.10.1
proglog                            0.1.10
prometheus_client                  0.21.0
prometheus-fastapi-instrumentator  7.0.0
prompt_toolkit                     3.0.48
propcache                          0.2.0
protobuf                           5.28.2
psutil                             6.0.0
ptyprocess                         0.7.0
pudb                               2024.1.2
pure_eval                          0.2.3
py-cpuinfo                         9.0.0
pyairports                         2.1.1
pyarrow                            17.0.0
pybind11                           2.13.6
pycocoevalcap                      1.2
pycocotools                        2.0.8
pycountry                          24.6.1
pycparser                          2.22
pycryptodome                       3.21.0
pydantic                           2.9.2
pydantic_core                      2.23.4
pydub                              0.25.1
Pygments                           2.18.0
pyiqa                              0.1.10
PyMatting                          1.1.12
PyMCubes                           0.1.6
pyparsing                          3.2.0
pyquaternion                       0.9.9
pytest                             8.3.4
python-dateutil                    2.9.0.post0
python-dotenv                      1.0.1
python-multipart                   0.0.12
pytorch3d                          0.7.8
pytz                               2024.2
PyYAML                             6.0.2
pyzmq                              26.2.0
qwen-vl-utils                      0.0.10
ray                                2.37.0
referencing                        0.35.1
regex                              2024.9.11
rembg                              2.0.59
requests                           2.32.3
requests-toolbelt                  1.0.0
retrying                           1.3.4
rich                               13.9.2
rpds-py                            0.20.0
ruff                               0.6.9
s3transfer                         0.10.3
safehttpx                          0.1.1
safetensors                        0.4.5
scikit-image                       0.24.0
scikit-learn                       1.5.2
scikit-video                       1.1.11
scipy                              1.14.1
semantic-version                   2.10.0
sentencepiece                      0.2.0
sentry-sdk                         2.18.0
setproctitle                       1.3.3
setuptools                         75.2.0
shapely                            2.0.7
shellingham                        1.5.4
six                                1.16.0
sk-video                           1.1.10
smmap                              5.0.1
sniffio                            1.3.1
soupsieve                          2.6
stack-data                         0.6.3
starlette                          0.40.0
SwissArmyTransformer               0.4.12
sympy                              1.13.1
tabulate                           0.9.0
tenacity                           9.0.0
tensorboard                        2.18.0
tensorboard-data-server            0.7.2
tensorboardX                       2.6.2.2
tensorflow-io-gcs-filesystem       0.37.1
termcolor                          2.5.0
thop                               0.1.1.post2209072238
threadpoolctl                      3.5.0
tifffile                           2024.9.20
tiktoken                           0.7.0
timm                               1.0.11
tokenizers                         0.20.3
tomesd                             0.1.3
tomli                              2.2.1
tomlkit                            0.12.0
torch                              2.6.0
torchaudio                         2.6.0
torchdiffeq                        0.2.4
torchsde                           0.2.6
torchvision                        0.21.0
tornado                            6.4.2
tqdm                               4.66.5
traitlets                          5.14.3
trampoline                         0.1.2
transformers                       4.46.2
transformers-stream-generator      0.0.4
trimesh                            4.5.2
triton                             3.2.0
typeguard                          2.13.3
typer                              0.12.5
typing_extensions                  4.12.2
tzdata                             2024.2
uc-micro-py                        1.0.3
urllib3                            2.2.3
urwid                              2.6.16
urwid_readline                     0.15.1
uvicorn                            0.32.0
uvloop                             0.21.0
wandb                              0.18.7
watchfiles                         0.24.0
wcwidth                            0.2.13
webdataset                         0.2.100
websocket-client                   1.8.0
websockets                         12.0
Werkzeug                           3.0.4
wheel                              0.44.0
widgetsnbextension                 4.0.13
wrapt                              1.17.0
xatlas                             0.0.9
xxhash                             3.5.0
yacs                               0.1.8
yapf                               0.43.0
yarl                               1.15.3
zipp                               3.20.2
```

### Logs

```shell

```

### System Info

- 🤗 Diffusers version: 0.33.0.dev0
- Platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.10.15
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.29.1
- Transformers version: 4.46.2
- Accelerate version: 1.4.0
- PEFT version: 0.13.2
- Bitsandbytes version: not installed
- Safetensors version: 0.4.5
- xFormers version: not installed
- Accelerator: NVIDIA A800-SXM4-80GB, 81251 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>

### Who can help?

@DN6 @a-r-r-o-w
2hard
Title: Lost connection to MySQL server during query Body: When my application throws an exception and I then try to access an endpoint again, it returns the error **(2013, 'Lost connection to MySQL server during query ([Errno 104] Connection reset by peer)')**. It behaves as if the stale connection were never dropped and re-established. With plain SQLAlchemy I was able to set the parameters **pool_timeout=60, pool_recycle=280, pool_size=20, max_overflow=50** when calling create_engine, but I don't know how to do the equivalent in ormar. Any idea how to do it? Thanks!
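A possible shape of the answer, as a minimal sketch: ormar sits on top of encode/databases, so pool options would go to `databases.Database` rather than to SQLAlchemy's `create_engine`. The keyword names below (`min_size`, `max_size`, `pool_recycle`) are assumptions modelled on the aiomysql backend and should be verified against your driver's documentation:

```python
# Hedged sketch: pass pool options to databases.Database, which ormar uses
# under the hood. min_size/max_size/pool_recycle are assumed option names
# forwarded to the underlying aiomysql connection pool.
import databases
import ormar
import sqlalchemy

DATABASE_URL = "mysql://user:password@localhost:3306/mydb"

database = databases.Database(
    DATABASE_URL,
    min_size=5,        # roughly SQLAlchemy's pool_size
    max_size=20,       # upper bound, akin to pool_size + max_overflow
    pool_recycle=280,  # recycle connections before MySQL's wait_timeout closes them
)
metadata = sqlalchemy.MetaData()


class BaseMeta(ormar.ModelMeta):
    database = database
    metadata = metadata
```

If the options are forwarded this way, `pool_recycle` plays the same role as in `create_engine`: connections older than the threshold get re-established instead of dying against MySQL's `wait_timeout`.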
1medium
Title: Fix Frontend Failing Test: paddle - tensor.torch.Tensor.fix Body: To-do List: https://github.com/unifyai/ivy/issues/27500
1medium
Title: [BUG] Container usage from external registry 'docker.io' found causes a warning in the tests Body:

### Description
<!--- Describe your issue/bug/request in detail -->

We started to get a warning that can be seen [here](https://dev.azure.com/best-practices/recommenders/_build/results?buildId=60846&view=logs&j=475df697-7465-54db-fcd2-cb9bdea8ab03&t=b7db66b6-fa35-5c82-573e-eabcb50ded02):

```
tools/docker/Dockerfile - Container usage from external registry 'docker.io' found.
##[warning]Container security analysis found 1 violations. This repo has one or more docker files having references to images from external registries. Please review https://aka.ms/containers-security-guidance to remove the reference of container images from external registries. Please reach out via teams (https://aka.ms/cssc-teams) or email (cssc@microsoft.com) for any questions or clarifications.
```

This is odd because I don't see any reference to docker.io in the code.

### In which platform does it happen?
<!--- Describe the platform where the issue is happening (use a list if needed) -->
<!--- For example: -->
<!--- * Azure Data Science Virtual Machine. -->
<!--- * Azure Databricks. -->
<!--- * Other platforms. -->

### How do we replicate the issue?
<!--- Please be specific as possible (use a list if needed). -->
<!--- For example: -->
<!--- * Create a conda environment for pyspark -->
<!--- * Run unit test `test_sar_pyspark.py` with `pytest -m 'spark'` -->
<!--- * ... -->

### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for SAR PySpark should pass successfully. -->

### Other Comments
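A hypothetical explanation (my assumption, not confirmed by the scanner output): any unqualified `FROM` image in a Dockerfile is implicitly resolved against Docker Hub, i.e. docker.io, so the scanner can flag the file even though the string "docker.io" never appears in it. A before/after sketch; the MCR mirror path is an illustrative assumption and should be verified:

```dockerfile
# Flagged: an unqualified image implicitly means docker.io/library/ubuntu:22.04
FROM ubuntu:22.04

# Not flagged: fully qualified reference to an approved registry.
# The mirror path below is an illustrative assumption; verify it exists:
# FROM mcr.microsoft.com/mirror/docker/library/ubuntu:22.04
```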
1medium
Title: Bypass captchas/cookies/consent windows? Body: Are there Python libraries that make it possible to bypass the various consent mechanisms news outlets put in front of readers to get them to allow cookies? It would be too cumbersome to develop a headless browser exclusively for trafilatura. A good example is the newspaper zeit.de. Related to #18.

Potential solutions:
- a headless browser with an automatic clicking mechanism
- using [AMP](https://www.howtogeek.com/284166/what-is-google-amp-and-why-is-it-in-my-search-results/) links

The output could be piped directly to trafilatura (in a terminal or via Python); a sketch of the AMP route follows.
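A minimal sketch of the AMP idea, assuming the publisher advertises the AMP variant via a `<link rel="amphtml">` tag; this is the shape of the approach, not a tested recipe:

```python
# Sketch: fetch a page, hop to its AMP variant if one is advertised, and
# extract with trafilatura. AMP pages often render without the consent overlay.
import trafilatura
from lxml import html


def extract_via_amp(url: str):
    downloaded = trafilatura.fetch_url(url)
    if downloaded is None:
        return None
    tree = html.fromstring(downloaded)
    amp_links = tree.xpath('//link[@rel="amphtml"]/@href')
    # Fall back to the original URL; relative hrefs would need urljoin.
    target = amp_links[0] if amp_links else url
    amp_html = trafilatura.fetch_url(target)
    return trafilatura.extract(amp_html) if amp_html else None
```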
1medium
Title: Enhance the memory efficiency of loading large models (400B) to prevent out-of-memory errors when using tensor parallelism. Body:

### Feature request
Support sharded checkpoint files that match the process rank for PyTorch distributed model creation and tensor-parallel inference.

### Motivation
When I attempted to test the Llama 405B model with FP8 precision using tensor parallelism (TP = 4), the server, which has 1.5 TB of RAM, experienced process termination because all four processes together consumed the entire memory. If each process could instead load only the shard of the weights matching its rank and build the model from PyTorch distributed tensors, loading would only require about 405 GB of RAM in total.

### Your contribution
I can help to create test cases for this feature.
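To make the request concrete, here is a minimal sketch of the loading pattern being asked for. The file layout, the uniform row-sharding, and the use of safetensors slicing are all illustrative assumptions, not the library's current API:

```python
# Sketch: each rank memory-maps the checkpoint and materializes only its own
# 1/world_size slice of every tensor, so peak host RAM stays near
# model_size / world_size per process instead of a full copy per process.
import torch.distributed as dist
from safetensors import safe_open


def load_rank_shard(checkpoint_path: str) -> dict:
    rank, world = dist.get_rank(), dist.get_world_size()
    shard = {}
    with safe_open(checkpoint_path, framework="pt", device="cpu") as f:
        for name in f.keys():
            sl = f.get_slice(name)
            rows = sl.get_shape()[0]
            step = rows // world
            # Row-sharding is illustrative; the real split axis depends on how
            # each layer is partitioned for tensor parallelism.
            shard[name] = sl[rank * step : (rank + 1) * step]
    return shard
```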
2hard
Title: detail about prefix tuning Body: In the prefix_tuning_template.py file: https://github.com/thunlp/OpenPrompt/blob/675545ce1f946aa186efda8e8640dbc29fd1159f/openprompt/prompts/prefix_tuning_template.py#L207 The code above pads the attention_mask for the extra prompt tokens. Why is `torch.zeros` used here? Should we use `torch.ones` instead?
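For context, a small illustration of why the choice matters, assuming the Hugging Face mask convention (1 = attend, 0 = ignore); whether OpenPrompt inverts the mask somewhere downstream is exactly the detail this issue asks about:

```python
# Under the HF convention, padding the prefix with zeros would tell the model
# to ignore the prefix key/values; ones would let every token attend to them.
import torch

attention_mask = torch.tensor([[1, 1, 1, 0],
                               [1, 1, 0, 0]])  # masks for the original tokens
prefix_len = 5
prefix_mask = torch.ones(attention_mask.size(0), prefix_len,
                         dtype=attention_mask.dtype)
extended_mask = torch.cat([prefix_mask, attention_mask], dim=1)
print(extended_mask.shape)  # torch.Size([2, 9])
```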
1medium
Title: Demo for highlighting text in a pdf file for LLM RAG purposes Body:

- [x] I have searched to see if a similar issue already exists.

**Is your feature request related to a problem? Please describe.**
I wanted a way to simply show a PDF with highlighted text at the right page (for LLM RAG purposes).

**Describe the solution you'd like**
A demo to build such an app.
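A sketch of the core highlighting step such a demo would need, assuming a recent PyMuPDF (`fitz`); the Gradio wiring around it is left out, and only the highlighting-and-rendering piece is shown:

```python
# Sketch: find the retrieved passage on a known page, add a highlight
# annotation, and render the page to an image a Gradio app could display.
import fitz  # PyMuPDF


def highlight_passage(pdf_path: str, page_number: int, passage: str) -> str:
    doc = fitz.open(pdf_path)
    page = doc[page_number]
    for rect in page.search_for(passage):  # one rectangle per matched line
        page.add_highlight_annot(rect)
    out_path = f"page_{page_number}.png"
    page.get_pixmap(dpi=150).save(out_path)  # rasterize for display
    doc.close()
    return out_path
```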
1medium