text (string, lengths 20–57.3k) | labels (class label, 4 classes)
---|---|
Title: How to implement a custom operator that supports multiple compute devices (CPU, CUDA)?
Body: # Ask a Question
I tried the following implementation, but it had no effect.
CUDA implementation:
```
struct CustomOPGpu : Ort::CustomOpBase<CustomOPGpu, CustomKernel> {
  const char* GetName() const { return "CustomOP"; };
  const char* GetExecutionProviderType() const { return "CUDAExecutionProvider"; };
  ...
};
```
CPU implementation:
```
struct CustomOPCpu : Ort::CustomOpBase<CustomOPCpu, CustomKernel> {
  const char* GetName() const { return "CustomOP"; };
  const char* GetExecutionProviderType() const { return "CPUExecutionProvider"; };
  ...
};
```
The doc (https://onnxruntime.ai/docs/reference/operators/add-custom-op.html) doesn't have any sample code.
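For context, one way the two kernels end up in the same model run is to build both into a shared library and load it when creating the session on the Python side; a rough sketch (the library path and model file are placeholders, not from my setup):
```python
import onnxruntime as ort

so = ort.SessionOptions()
# Load the shared library that contains both the CPU and the CUDA kernel
# registrations for "CustomOP".
so.register_custom_ops_library("./libcustom_op.so")

# With both providers listed, ONNX Runtime picks the kernel whose
# execution provider matches where the node is assigned.
sess = ort.InferenceSession(
    "model_with_custom_op.onnx",
    sess_options=so,
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
```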
### Question
<!-- Explain your question here. -->
### Further information
- Relevant Area: <!--e.g., model usage, backend, best practices, converters, shape_inference, version_converter, training, test, operators, IR, ONNX Hub, data preprocessing, CI pipelines. -->
- Is this issue related to a specific model?
**Model name**: <!-- *e.g. mnist* -->
**Model opset**: <!-- *e.g. 17* -->
### Notes
<!-- Any additional information, code snippets. -->
| 1medium
|
Title: Support for LayerCAM
Body: Our paper "LayerCAM: Exploring Hierarchical Class Activation Maps for Localization" was recently accepted by TIP. It can visualize the class activation maps from any CNN layer of an off-the-shelf network. Could you add our method to your popular repository so that more people can try it? Our method is a simple modification of Grad-CAM and should be easy to implement. Here is the [paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9462463) and [code](https://github.com/PengtaoJiang/LayerCAM). Hoping for your reply. | 1medium
|
Title: The example 'is is not what it is!' raises some questions
Body: I ran this code in Python 3 and found that:
a = 257
b = 257
a is b
True
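Whether `is` returns True here depends on how the statements are compiled, not only on the Python version; a minimal CPython-specific illustration:
```python
# In one code object (a script, a function, or a single notebook cell),
# CPython's compiler typically reuses a single constant for both 257
# literals, so `is` compares the same object.
a = 257
b = 257
print(a is b)   # usually True when run as a script

# Typed line by line in the interactive REPL, each statement is compiled
# separately, so the two 257 objects are usually distinct and `is` is False.
# Only the small integers -5..256 are guaranteed to be cached and shared.
```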
(Maybe this behavior has changed in a newer version.) | 0easy
|
Title: [Feature request] Direct Java access
Body: **🚀 Feature Description**
As a Java developer, I would like access to this voice synthesis from within the JVM environment. There are many ways to accomplish this, and that is a good topic for discussion. In the meantime, I have worked up and tested a solution for anyone in the same boat as me. I was looking for a Java TTS engine using WaveNet technology and came up short. My hope is that even if Java support is not officially added, this issue can serve as a breadcrumb trail for someone willing to do the work to get it running for themselves.
**Solution**
I added com.github.dockerjava to my project as well as OkHTTP. I use the Docker manager to load a Dockerfile that extends the regular Docker release by pre-caching the voice packs I intend to use and changing the startup conditions.
https://github.com/Halloween2020TheChild/CoquiDocker/blob/main/Dockerfile
Then I wrote a small management class in Java to load the Docker image and launch server.py within the container, with the voice configurable by the user in Java. The TTS server is then reached via OkHTTP to send the text and receive the audio.
https://github.com/CommonWealthRobotics/bowler-script-kernel/blob/2f63daca4153e9cf97b5d31f534e5c051bad6720/src/main/java/com/neuronrobotics/bowlerstudio/CoquiDockerManager.java
**Alternative Solutions**
I was hoping to be able to load the network using the DeepJavaLibrary https://github.com/deepjavalibrary/djl to make the solution pure Java. Unfortunately, that was more of a lift than I was able to muster for my project, and the Docker wrapping worked very well. I think it would be worthwhile to have a pure DJL implementation to make these voices accessible across all platforms.
**Additional context**
For anyone interested, I am building a modern Zoltar fortune-telling machine with the most advanced neural nets I can get my hands on; check it out if you are interested: https://github.com/Halloween2020TheChild/Zoltar
This Java layer that I am putting up as a solution example adds Coqui to BowlerStudio, a free, open source robotics control engine: https://commonwealthrobotics.com/
| 1medium
|
Title: [LNL][Windows][Inductor] Application error: The memory could not be read.
Body: ### 🐛 Describe the bug
When running E2E inductor on LNL, the following error appears randomly:

### Versions
- stock pytorch :
- pip install torch --index-url https://download.pytorch.org/whl/test/xpu
- git clone https://github.com/pytorch/pytorch.git
- git checkout b1940b5867e40e40ebdce4db76f76d3d0b71d3f4
- torch-xpu-ops: Commit (pin) - 026b2c8c7c92a7b2cec5d26334006e3423251cc6
- Driver: 32.0.101.6647 | 2hard
|
Title: [BUG] Pascal performance halved with new version.
Body: **Describe the bug**
I went to try the new AutoGPTQ since there were changes to the kernel.
Performance now gets cut in half if you enable fused attention, whereas on 0.3.2 it was doing better.
30b-int4
```
latest:
Output generated in 47.31 seconds (4.21 tokens/s, 199 tokens, context 22, seed 1602137517
Output generated in 47.34 seconds (4.20 tokens/s, 199 tokens, context 22, seed 734386331)
disabled fused attention
Output generated in 23.74 seconds (8.38 tokens/s, 199 tokens, context 22, seed 565801207)
0.3.2 + fused attention
Output generated in 23.03 seconds (8.64 tokens/s, 199 tokens, context 22, seed 988359984)
Output generated in 22.64 seconds (8.79 tokens/s, 199 tokens, context 22, seed 513951538)
0.3.2 no fused attn
Output generated in 24.04 seconds (8.28 tokens/s, 199 tokens, context 22, seed 528918284)
```
**Hardware details**
P6000/P40 Compute 6.1
**Software version**
Git Autogptq
**Expected behavior**
Identical to previous autogptq when disable_exllama=True
**Additional context**
Some change to fused attn must be using FP16. Obviously at full context this will blow up to unusable. The reason it sort of matters is that for my 3090s I can simply use exllama but for older cards I can't really use anything else.
| 2hard
|
Title: is there a way to wait a given timeout *after* I got one response?
Body: This is relevant for multicast - my dns.query.udp returns upon the first reply to the query (as expected), whereas I would like to wait a certain number of seconds before returning, allowing some time for more responses from other devices:
```
import dns.name
import dns.message
import dns.query
import dns.flags
SERVICE_TO_QUERY = "_http._tcp.local"
QUERY_ADDRESS = '224.0.0.251'
domain = dns.name.from_text(SERVICE_TO_QUERY)
request = dns.message.make_query(domain, dns.rdatatype.ANY)
response = dns.query.udp(request, QUERY_ADDRESS, timeout=10, port=5353)
for a in response.additional:
    print(a)
```
Any way I could do that?
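One way to do that (a sketch, not an existing dnspython helper - `udp_collect` below is a made-up name) is to send the query over a plain UDP socket and keep reading replies until a fixed deadline, parsing each reply with `dns.message.from_wire`:
```python
import socket
import time

import dns.message

def udp_collect(request, where, port=5353, wait=3.0):
    """Send `request` once, then gather every reply that arrives within
    `wait` seconds instead of returning on the first one."""
    responses = []
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        sock.sendto(request.to_wire(), (where, port))
        deadline = time.monotonic() + wait
        while True:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            sock.settimeout(remaining)
            try:
                wire, _ = sock.recvfrom(65535)
            except socket.timeout:
                break
            responses.append(dns.message.from_wire(wire))
    return responses

# responses = udp_collect(request, QUERY_ADDRESS, port=5353, wait=3.0)
```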
| 1medium
|
Title: [Use-case] Auto-notifier if popular, relevant posts are on HN
Body: Input: query (e.g. "fine-tuning LLMs"), minimum number of comments, how frequently to check per day, email
Output: Emails the link to the post + summary after each run
Sources: https://hn.algolia.com/api, https://modal.com/docs/examples/hackernews_alerts | 1medium
|
Title: [Feature]
Body: `with requests.get(task['url'], stream=True, timeout=60, proxies=proxies,
headers=DEFAULT_HEADERS, verify=False) as response:`
Error: the `stream` parameter does not exist. Is this kind of streaming read not supported at the moment? When will it be supported? | 1medium
|
Title: BUG: SeriesGroupBy.apply applies function to pandas DataFrame instead of to pandas Series
Body: ### Modin version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest released version of Modin.
- [X] I have confirmed this bug exists on the main branch of Modin. (In order to do this you can follow [this guide](https://modin.readthedocs.io/en/stable/getting_started/installation.html#installing-from-the-github-master-branch).)
### Reproducible Example
```python
import modin.pandas as pd
df = pd.DataFrame(['a', 'b'])
df[0].groupby(df[0]).apply(lambda x: x.to_frame())
```
### Issue Description
in `SeriesGroupBy`, `func` always operates on a series, and never on a dataframe. We need to pass a signal [here](https://github.com/modin-project/modin/blob/cbb3b5da2cff8db2ff41715f1e98ae61d632eba1/modin/pandas/groupby.py#L1863) that `func` applies to series, not to frames.
code example works in pandas:
```python
import pandas as pd
df = pd.DataFrame(['a', 'b'])
df[0].groupby(df[0]).apply(lambda x: x.to_frame())
```
### Expected Behavior
should match pandas
### Error Logs
<details>
```python-traceback
---------------------------------------------------------------------------
RayTaskError(AttributeError) Traceback (most recent call last)
Cell In[1], line 4
1 import modin.pandas as pd
3 df = pd.DataFrame(['a', 'b'])
----> 4 df[0].groupby(df[0]).apply(lambda x: x.to_frame())
File ~/sources/modin/modin/logging/logger_decorator.py:125, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
110 """
111 Compute function with logging if Modin logging is enabled.
112
(...)
122 Any
123 """
124 if LogMode.get() == "disable":
--> 125 return obj(*args, **kwargs)
127 logger = get_logger()
128 logger.log(log_level, start_line)
File ~/sources/modin/modin/pandas/groupby.py:1863, in SeriesGroupBy.apply(self, func, *args, **kwargs)
1862 def apply(self, func, *args, **kwargs):
-> 1863 return super().apply(func, *args, **kwargs)
File ~/sources/modin/modin/logging/logger_decorator.py:125, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
110 """
111 Compute function with logging if Modin logging is enabled.
112
(...)
122 Any
123 """
124 if LogMode.get() == "disable":
--> 125 return obj(*args, **kwargs)
127 logger = get_logger()
128 logger.log(log_level, start_line)
File ~/sources/modin/modin/pandas/groupby.py:654, in DataFrameGroupBy.apply(self, func, include_groups, *args, **kwargs)
651 if not isinstance(func, BuiltinFunctionType):
652 func = wrap_udf_function(func)
--> 654 apply_res = self._wrap_aggregation(
655 qc_method=type(self._query_compiler).groupby_agg,
656 numeric_only=False,
657 agg_func=func,
658 agg_args=args,
659 agg_kwargs={**kwargs, "include_groups": include_groups},
660 how="group_wise",
661 )
662 reduced_index = pandas.Index([MODIN_UNNAMED_SERIES_LABEL])
663 if not isinstance(apply_res, Series) and apply_res.columns.equals(
664 reduced_index
665 ):
File ~/sources/modin/modin/logging/logger_decorator.py:125, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
110 """
111 Compute function with logging if Modin logging is enabled.
112
(...)
122 Any
123 """
124 if LogMode.get() == "disable":
--> 125 return obj(*args, **kwargs)
127 logger = get_logger()
128 logger.log(log_level, start_line)
File ~/sources/modin/modin/pandas/groupby.py:1664, in DataFrameGroupBy._wrap_aggregation(self, qc_method, numeric_only, agg_args, agg_kwargs, **kwargs)
1661 else:
1662 groupby_qc = self._query_compiler
-> 1664 return type(self._df)(
1665 query_compiler=qc_method(
1666 groupby_qc,
1667 by=self._by,
1668 axis=self._axis,
1669 groupby_kwargs=self._kwargs,
1670 agg_args=agg_args,
1671 agg_kwargs=agg_kwargs,
1672 drop=self._drop,
1673 **kwargs,
1674 )
1675 )
File ~/sources/modin/modin/logging/logger_decorator.py:125, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
110 """
111 Compute function with logging if Modin logging is enabled.
112
(...)
122 Any
123 """
124 if LogMode.get() == "disable":
--> 125 return obj(*args, **kwargs)
127 logger = get_logger()
128 logger.log(log_level, start_line)
File ~/sources/modin/modin/pandas/series.py:151, in Series.__init__(self, data, index, dtype, name, copy, fastpath, query_compiler)
137 name = data.name
139 query_compiler = from_pandas(
140 pandas.DataFrame(
141 pandas.Series(
(...)
149 )
150 )._query_compiler
--> 151 self._query_compiler = query_compiler.columnarize()
152 if name is not None:
153 self.name = name
File ~/sources/modin/modin/logging/logger_decorator.py:125, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
110 """
111 Compute function with logging if Modin logging is enabled.
112
(...)
122 Any
123 """
124 if LogMode.get() == "disable":
--> 125 return obj(*args, **kwargs)
127 logger = get_logger()
128 logger.log(log_level, start_line)
File ~/sources/modin/modin/core/storage_formats/base/query_compiler.py:1258, in BaseQueryCompiler.columnarize(self)
1255 return self
1257 result = self
-> 1258 if len(self.columns) != 1 or (
1259 len(self.index) == 1 and self.index[0] == MODIN_UNNAMED_SERIES_LABEL
1260 ):
1261 result = self.transpose()
1262 result._shape_hint = "column"
File ~/sources/modin/modin/core/storage_formats/pandas/query_compiler.py:99, in _get_axis.<locals>.<lambda>(self)
97 return lambda self: self._modin_frame.index
98 else:
---> 99 return lambda self: self._modin_frame.columns
File ~/sources/modin/modin/core/dataframe/pandas/dataframe/dataframe.py:691, in PandasDataframe._get_columns(self)
682 """
683 Get the columns from the cache object.
684
(...)
688 An index object containing the column labels.
689 """
690 if self.has_columns_cache:
--> 691 columns, column_widths = self._columns_cache.get(return_lengths=True)
692 else:
693 columns, column_widths = self._compute_axis_labels_and_lengths(1)
File ~/sources/modin/modin/core/dataframe/pandas/metadata/index.py:202, in ModinIndex.get(self, return_lengths)
200 if not self.is_materialized:
201 if callable(self._value):
--> 202 index, self._lengths_cache = self._value()
203 self._value = ensure_index(index)
204 elif self._value is None:
File ~/sources/modin/modin/logging/logger_decorator.py:125, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
110 """
111 Compute function with logging if Modin logging is enabled.
112
(...)
122 Any
123 """
124 if LogMode.get() == "disable":
--> 125 return obj(*args, **kwargs)
127 logger = get_logger()
128 logger.log(log_level, start_line)
File ~/sources/modin/modin/core/dataframe/pandas/dataframe/dataframe.py:799, in PandasDataframe._compute_axis_labels_and_lengths(self, axis, partitions)
797 if partitions is None:
798 partitions = self._partitions
--> 799 new_index, internal_idx = self._partition_mgr_cls.get_indices(axis, partitions)
800 return new_index, list(map(len, internal_idx))
File ~/sources/modin/modin/logging/logger_decorator.py:125, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
110 """
111 Compute function with logging if Modin logging is enabled.
112
(...)
122 Any
123 """
124 if LogMode.get() == "disable":
--> 125 return obj(*args, **kwargs)
127 logger = get_logger()
128 logger.log(log_level, start_line)
File ~/sources/modin/modin/core/dataframe/pandas/partitioning/partition_manager.py:1045, in PandasDataframePartitionManager.get_indices(cls, axis, partitions, index_func)
1043 if len(target):
1044 new_idx = [idx.apply(func) for idx in target[0]]
-> 1045 new_idx = cls.get_objects_from_partitions(new_idx)
1046 else:
1047 new_idx = [pandas.Index([])]
File ~/sources/modin/modin/logging/logger_decorator.py:125, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
110 """
111 Compute function with logging if Modin logging is enabled.
112
(...)
122 Any
123 """
124 if LogMode.get() == "disable":
--> 125 return obj(*args, **kwargs)
127 logger = get_logger()
128 logger.log(log_level, start_line)
File ~/sources/modin/modin/core/dataframe/pandas/partitioning/partition_manager.py:986, in PandasDataframePartitionManager.get_objects_from_partitions(cls, partitions)
982 partitions[idx] = part.force_materialization()
983 assert all(
984 [len(partition.list_of_blocks) == 1 for partition in partitions]
985 ), "Implementation assumes that each partition contains a single block."
--> 986 return cls._execution_wrapper.materialize(
987 [partition.list_of_blocks[0] for partition in partitions]
988 )
989 return [partition.get() for partition in partitions]
File ~/sources/modin/modin/core/execution/ray/common/engine_wrapper.py:129, in RayWrapper.materialize(cls, obj_id)
126 return ray.get(obj_id) if isinstance(obj_id, RayObjectRefTypes) else obj_id
128 if all(isinstance(obj, RayObjectRefTypes) for obj in obj_id):
--> 129 return ray.get(obj_id)
131 ids = {}
132 result = []
File ~/miniconda3/envs/modin-latest/lib/python3.9/site-packages/ray/_private/auto_init_hook.py:24, in wrap_auto_init.<locals>.auto_init_wrapper(*args, **kwargs)
21 @wraps(fn)
22 def auto_init_wrapper(*args, **kwargs):
23 auto_init_ray()
---> 24 return fn(*args, **kwargs)
File ~/miniconda3/envs/modin-latest/lib/python3.9/site-packages/ray/_private/client_mode_hook.py:103, in client_mode_hook.<locals>.wrapper(*args, **kwargs)
101 if func.__name__ != "init" or is_client_mode_enabled_by_default:
102 return getattr(ray, func.__name__)(*args, **kwargs)
--> 103 return func(*args, **kwargs)
File ~/miniconda3/envs/modin-latest/lib/python3.9/site-packages/ray/_private/worker.py:2563, in get(object_refs, timeout)
2561 worker.core_worker.dump_object_store_memory_usage()
2562 if isinstance(value, RayTaskError):
-> 2563 raise value.as_instanceof_cause()
2564 else:
2565 raise value
RayTaskError(AttributeError): ray::remote_exec_func() (pid=65613, ip=127.0.0.1)
At least one of the input arguments for this task could not be computed:
ray.exceptions.RayTaskError: ray::_deploy_ray_func() (pid=65613, ip=127.0.0.1)
File "/Users/mvashishtha/sources/modin/modin/core/execution/ray/implementations/pandas_on_ray/partitioning/virtual_partition.py", line 324, in _deploy_ray_func
result = deployer(axis, f_to_deploy, f_args, f_kwargs, *deploy_args, **kwargs)
File "/Users/mvashishtha/sources/modin/modin/logging/logger_decorator.py", line 125, in run_and_log
return obj(*args, **kwargs)
File "/Users/mvashishtha/sources/modin/modin/core/dataframe/pandas/partitioning/axis_partition.py", line 433, in deploy_axis_func
result = func(dataframe, *f_args, **f_kwargs)
File "/Users/mvashishtha/sources/modin/modin/core/dataframe/pandas/dataframe/dataframe.py", line 1989, in _tree_reduce_func
series_result = func(df, *args, **kwargs)
File "/Users/mvashishtha/sources/modin/modin/core/dataframe/pandas/dataframe/dataframe.py", line 4077, in apply_func
result = operator(df.groupby(by, **kwargs))
File "/Users/mvashishtha/sources/modin/modin/core/storage_formats/pandas/query_compiler.py", line 3752, in <lambda>
operator=lambda grp: agg_func(grp, *agg_args, **agg_kwargs),
File "/Users/mvashishtha/sources/modin/modin/core/storage_formats/pandas/query_compiler.py", line 3733, in agg_func
result = agg_method(grp, original_agg_func, *args, **kwargs)
File "/Users/mvashishtha/miniconda3/envs/modin-latest/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1824, in apply
result = self._python_apply_general(f, self._selected_obj)
File "/Users/mvashishtha/miniconda3/envs/modin-latest/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1885, in _python_apply_general
values, mutated = self._grouper.apply_groupwise(f, data, self.axis)
File "/Users/mvashishtha/miniconda3/envs/modin-latest/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 919, in apply_groupwise
res = f(group)
File "/Users/mvashishtha/sources/modin/modin/utils.py", line 689, in wrapper
result = func(*args, **kwargs)
File "<ipython-input-1-cf24b11d0b7f>", line 4, in <lambda>
File "/Users/mvashishtha/miniconda3/envs/modin-latest/lib/python3.9/site-packages/pandas/core/generic.py", line 6296, in __getattr__
return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute 'to_frame'
```
</details>
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : c75343687799e3b69fc04a99b8a9d11eab1fd984
python : 3.9.18.final.0
python-bits : 64
OS : Darwin
OS-release : 23.4.0
Version : Darwin Kernel Version 23.4.0: Wed Feb 21 21:45:49 PST 2024; root:xnu-10063.101.15~2/RELEASE_ARM64_T6020
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
Modin dependencies
------------------
modin : 0.28.0+23.gc7534368
ray : 2.8.0
dask : None
distributed : None
hdk : None
pandas dependencies
-------------------
pandas : 2.2.1
numpy : 1.26.1
pytz : 2023.3.post1
dateutil : 2.8.2
setuptools : 68.0.0
pip : 23.3
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.2
IPython : 8.17.2
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2023.10.0
gcsfs : None
matplotlib : 3.8.1
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 14.0.1
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.11.3
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None
</details>
| 1medium
|
Title: SequentialExecutor is not compatible with task-sdk
Body: ### Apache Airflow version
main (development)
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
`SequentialExecutor`, which is the default executor, fails on the main branch. It seems the `run` command removed in #47453 is still used by SequentialExecutor, so the executor might need to be ported to task-sdk, I guess.
```
[2025-03-11T16:08:34.960+0530] {base_executor.py:301} DEBUG - 32 open slots for executor SequentialExecutor
[2025-03-11T16:08:34.961+0530] {base_executor.py:253} DEBUG - Calling the <class 'airflow.executors.sequential_executor.SequentialExecutor'> sync method
[2025-03-11T16:08:34.961+0530] {sequential_executor.py:85} INFO - Executing command: ['airflow', 'tasks', 'run', 'example_branch_operator', 'branching', 'scheduled__2025-03-11T00:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/example_branch_operator.py']
/home/karthikeyan/stuff/python/airflow/airflow/cli/cli_config.py:602 DeprecationWarning: The web_server_port option in [webserver] has been moved to the port option in [api] - the old setting has been used, but please update your config.
/home/karthikeyan/stuff/python/airflow/airflow/cli/cli_config.py:608 DeprecationWarning: The workers option in [webserver] has been moved to the workers option in [api] - the old setting has been used, but please update your config.
/home/karthikeyan/stuff/python/airflow/airflow/cli/cli_config.py:620 DeprecationWarning: The web_server_host option in [webserver] has been moved to the host option in [api] - the old setting has been used, but please update your config.
/home/karthikeyan/stuff/python/airflow/airflow/cli/cli_config.py:625 DeprecationWarning: The access_logfile option in [webserver] has been moved to the access_logfile option in [api] - the old setting has been used, but please update your config.
/home/karthikeyan/stuff/python/airflow/airflow/cli/cli_config.py:640 DeprecationWarning: The web_server_ssl_cert option in [webserver] has been moved to the ssl_cert option in [api] - the old setting has been used, but please update your config.
/home/karthikeyan/stuff/python/airflow/airflow/cli/cli_config.py:645 DeprecationWarning: The web_server_ssl_key option in [webserver] has been moved to the ssl_key option in [api] - the old setting has been used, but please update your config.
Usage: airflow tasks [-h] COMMAND ...
Manage tasks
Positional Arguments:
COMMAND
clear Clear a set of task instance, as if they never ran
failed-deps Returns the unmet dependencies for a task instance
list List the tasks within a DAG
render Render a task instance's template(s)
state Get the status of a task instance
states-for-dag-run
Get the status of all task instances in a dag run
test Test a task instance
Options:
-h, --help show this help message and exit
airflow tasks command error: argument COMMAND: invalid choice: 'run' (choose from 'clear', 'failed-deps', 'list', 'render', 'state', 'states-for-dag-run', 'test'), see help above.
[2025-03-11T16:08:37.977+0530] {base_executor.py:453} DEBUG - Changing state: TaskInstanceKey(dag_id='example_branch_operator', task_id='branching', run_id='scheduled__2025-03-11T00:00:00+00:00', try_number=1, map_index=-1)
```
### What you think should happen instead?
_No response_
### How to reproduce
1. Set executor as `SequentialExecutor` in airflow.cfg
2. Run a sample example dag
### Operating System
Ubuntu 20.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| 1medium
|
Title: [BUG] Adding a new page via CMS wizard does not start in "edit" mode.
Body: <!--
Please fill in each section below, otherwise, your issue will be closed.
This info allows django CMS maintainers to diagnose (and fix!) your issue
as quickly as possible.
-->
## Description
Adding a new page via CMS wizard does not bring up the new page in "edit" mode. It appears to be in preview mode.
<!--
If this is a security issue stop immediately and follow the instructions at:
http://docs.django-cms.org/en/latest/contributing/development-policies.html#reporting-security-issues
-->
## Steps to reproduce
Following the quick-start guide
* Install / start new container
* Add the first new page
* Click Save
* Page is in Preview mode.
* You now have to click [Edit] in order to click Publish per the quick-start guide.
<!--
Clear steps describing how to reproduce the issue.
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
-->
## Expected behaviour
<!--
A clear and concise description of what you expected to happen.
-->
The user should land on their newly created page in Edit mode so they can Publish after saving from the modal dialog.
## Actual behaviour
<!--
A clear and concise description of what is actually happening.
-->
## Screenshots
<!--If applicable, add screenshots to help explain your problem.
-->
## Additional information (CMS/Python/Django versions)
<!--
Add any other context about the problem such as environment,
CMS/Python/Django versions, logs etc. here.
-->
CMS 4.1rc4
## Do you want to help fix this issue?
<!--
The django CMS project is managed and kept alive by its open source community and is backed by the [django CMS Association](https://www.django-cms.org/en/about-us/). We therefore welcome any help and are grateful if people contribute to the project. Please use 'x' to check the items below.
-->
* [x] Yes, I want to help fix this issue and I will join #workgroup-pr-review on [Slack](https://www.django-cms.org/slack) to confirm with the community that a PR is welcome.
* [ ] No, I only want to report the issue.
| 1medium
|
Title: can A matrix, learned by MMC, have negative values?
Body: I got some questions after running the MMC code.
Q1.
I've run test_iris function of Test_MMC class in metric_learn_test.py
The expected matrix A of mahalanobis distance is
expected = [[+0.00046504, +0.00083371, -0.00111959, -0.00165265],
[+0.00083371, +0.00149466, -0.00200719, -0.00296284],
[-0.00111959, -0.00200719, +0.00269546, +0.00397881],
[-0.00165265, -0.00296284, +0.00397881, +0.00587320]]
in _fit_full case.
However, what I know about the Mahalanobis distance is that the matrix A has to be positive.
The optimization process in mmc._fit_full cannot always return a result that satisfies the second constraint (A > 0).
Is it ok?
(I did clustering using the Mahalanobis distance trained by MMC on the iris data, but the clustering result was not similar to the labels (classes) ㅜㅜ)
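For reference, the constraint A ⪰ 0 in this setting means positive semidefinite (all eigenvalues ≥ 0), not entrywise positive, so negative off-diagonal entries are not a problem by themselves; a quick numpy check of the expected matrix above:
```python
import numpy as np

A = np.array([[+0.00046504, +0.00083371, -0.00111959, -0.00165265],
              [+0.00083371, +0.00149466, -0.00200719, -0.00296284],
              [-0.00111959, -0.00200719, +0.00269546, +0.00397881],
              [-0.00165265, -0.00296284, +0.00397881, +0.00587320]])

# A valid Mahalanobis matrix only needs to be symmetric positive semidefinite,
# i.e. all eigenvalues >= 0 (up to numerical noise); individual entries may be negative.
eigvals = np.linalg.eigvalsh(A)
print(eigvals)
print(np.all(eigvals >= -1e-10))  # True means A is (numerically) PSD
```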
Q2.
There are some implementation details that are not explained in the original paper, like the learning-rate control:
if satisfy and (obj > obj_previous or cycle == 0):
    alpha *= 1.05
else:
    alpha /= 2
Is all of this MMC implementation based on the original paper's implementation?
Thanks a lot for your effort!
| 1medium
|
Title: how to change imdb.pkl to txt files
Body: In cnn_sentence_classification.py, there is this line:
train, test, _ = imdb.load_data(path='imdb.pkl', n_words=10000, valid_portion=0.1)
How can I convert the pkl data to txt files? | 0easy
|
Title: Have problem when deploy in AWS my Django Ninja project
Body: I have a Django Ninja project but have problems accessing some endpoints when it is deployed on AWS. Some weeks ago this project was working fine; it recently started having problems.
| 1medium
|
Title: Why does Mixins with resolver not work with ModelSchema?
Body: Please describe what you are trying to achieve
I wanted to mix a resolver into ModelSchemas so I do not have to write them all the time.
Please include code examples (like models code, schemes code, view function) to help understand the issue
First define a Mixin class with resolver
```
from pydantic import field_validator
from appcore.services.utils import slugify
class IdSerializerOutMixin:
"""
Mixin to convert uuid4 to websafebase64 id to be used in the response
"""
id: str
@staticmethod
def resolve_id(obj):
return slugify(obj.id)
```
This works
```
from ninja import ModelSchema
from appcore.serializers.commons import IdSerializerOutMixin
from appcore.services.utils import slugify
from appstore.models.collections import Collection
class CreateCollectionPostOut(ModelSchema):
id: str
class Meta:
model = Collection
exclude = ["user"]
@staticmethod
def resolve_id(obj):
return slugify(obj.id)
```
This does not work
```
from ninja import ModelSchema
from appcore.serializers.commons import IdSerializerOutMixin
from appcore.services.utils import slugify
from appstore.models.collections import Collection
class CreateCollectionPostOut(ModelSchema, IdSerializerOutMixin):
class Meta:
model = Collection
exclude = ["user"]
```
Thanks! | 1medium
|
Title: Unable to use Seq2SeqTrainingArguments and Seq2SeqTrainer
Body: ### System Info
I am using the following versions:
- tensorflow 2.18.0
- tensorboard 2.18.0
- transformers 4.49.0
- keras 3.8.0
- Python 3.10.0
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I want to use `Seq2SeqTrainingArguments` and `Seq2SeqTrainer`. The code was working with the old version, `transformers 4.28.1`.
Here is the minimal executable example:
```
import transformers
from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer
training_args = Seq2SeqTrainingArguments()
```
The main error is:
```
RuntimeError: Failed to import transformers.trainer_seq2seq because of the following error (look up to see its traceback):
Failed to import transformers.integrations.integration_utils because of the following error (look up to see its traceback):
Failed to import transformers.modeling_tf_utils because of the following error (look up to see its traceback):
module 'tensorflow._api.v2.compat.v2.__internal__' has no attribute 'register_load_context_function'
```
I am using the following versions:
- tensorflow 2.18.0
- tensorboard 2.18.0
- transformers 4.49.0
- keras 3.8.0
- Python 3.10.0
But the official example uses similar codes: https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation.py#L595
**My question is**: apart from downgrading the `transformers`, how do I fix this issue? Thanks.
---
Full error is:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
File ~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:1863, in _LazyModule._get_module(self, module_name)
1862 try:
-> 1863 return importlib.import_module("." + module_name, self.__name__)
1864 except Exception as e:
File /opt/tljh/user/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:1006, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:688, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:883, in exec_module(self, module)
File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)
File ~/.local/lib/python3.10/site-packages/transformers/modeling_tf_utils.py:38
37 from . import DataCollatorWithPadding, DefaultDataCollator
---> 38 from .activations_tf import get_tf_activation
39 from .configuration_utils import PretrainedConfig
File ~/.local/lib/python3.10/site-packages/transformers/activations_tf.py:22
21 try:
---> 22 import tf_keras as keras
23 except (ModuleNotFoundError, ImportError):
File ~/.local/lib/python3.10/site-packages/tf_keras/__init__.py:3
1 """AUTOGENERATED. DO NOT EDIT."""
----> 3 from tf_keras import __internal__
4 from tf_keras import activations
File ~/.local/lib/python3.10/site-packages/tf_keras/__internal__/__init__.py:6
5 from tf_keras.__internal__ import losses
----> 6 from tf_keras.__internal__ import models
7 from tf_keras.__internal__ import optimizers
File ~/.local/lib/python3.10/site-packages/tf_keras/__internal__/models/__init__.py:3
1 """AUTOGENERATED. DO NOT EDIT."""
----> 3 from tf_keras.src.models.cloning import clone_and_build_model
4 from tf_keras.src.models.cloning import in_place_subclassed_model_state_restoration
File ~/.local/lib/python3.10/site-packages/tf_keras/src/__init__.py:21
15 """Implementation of the TF-Keras API, the high-level API of TensorFlow.
16
17 Detailed documentation and user guides are available at
18 [keras.io](https://keras.io).
19 """
---> 21 from tf_keras.src import applications
22 from tf_keras.src import distribute
File ~/.local/lib/python3.10/site-packages/tf_keras/src/applications/__init__.py:18
15 """Keras Applications are premade architectures with pre-trained weights."""
---> 18 from tf_keras.src.applications.convnext import ConvNeXtBase
19 from tf_keras.src.applications.convnext import ConvNeXtLarge
File ~/.local/lib/python3.10/site-packages/tf_keras/src/applications/convnext.py:33
32 from tf_keras.src.applications import imagenet_utils
---> 33 from tf_keras.src.engine import sequential
34 from tf_keras.src.engine import training as training_lib
File ~/.local/lib/python3.10/site-packages/tf_keras/src/engine/sequential.py:24
23 from tf_keras.src.engine import base_layer
---> 24 from tf_keras.src.engine import functional
25 from tf_keras.src.engine import input_layer
File ~/.local/lib/python3.10/site-packages/tf_keras/src/engine/functional.py:33
32 from tf_keras.src.engine import node as node_module
---> 33 from tf_keras.src.engine import training as training_lib
34 from tf_keras.src.engine import training_utils
File ~/.local/lib/python3.10/site-packages/tf_keras/src/engine/training.py:48
47 from tf_keras.src.saving import pickle_utils
---> 48 from tf_keras.src.saving import saving_api
49 from tf_keras.src.saving import saving_lib
File ~/.local/lib/python3.10/site-packages/tf_keras/src/saving/saving_api.py:25
24 from tf_keras.src.saving import saving_lib
---> 25 from tf_keras.src.saving.legacy import save as legacy_sm_saving_lib
26 from tf_keras.src.utils import io_utils
File ~/.local/lib/python3.10/site-packages/tf_keras/src/saving/legacy/save.py:27
26 from tf_keras.src.saving.legacy.saved_model import load as saved_model_load
---> 27 from tf_keras.src.saving.legacy.saved_model import load_context
28 from tf_keras.src.saving.legacy.saved_model import save as saved_model_save
File ~/.local/lib/python3.10/site-packages/tf_keras/src/saving/legacy/saved_model/load_context.py:68
65 return _load_context.in_load_context()
---> 68 tf.__internal__.register_load_context_function(in_load_context)
AttributeError: module 'tensorflow._api.v2.compat.v2.__internal__' has no attribute 'register_load_context_function'
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
File ~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:1863, in _LazyModule._get_module(self, module_name)
1862 try:
-> 1863 return importlib.import_module("." + module_name, self.__name__)
1864 except Exception as e:
File /opt/tljh/user/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:1006, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:688, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:883, in exec_module(self, module)
File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)
File ~/.local/lib/python3.10/site-packages/transformers/integrations/integration_utils.py:36
34 import packaging.version
---> 36 from .. import PreTrainedModel, TFPreTrainedModel
37 from .. import __version__ as version
File <frozen importlib._bootstrap>:1075, in _handle_fromlist(module, fromlist, import_, recursive)
File ~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:1851, in _LazyModule.__getattr__(self, name)
1850 elif name in self._class_to_module.keys():
-> 1851 module = self._get_module(self._class_to_module[name])
1852 value = getattr(module, name)
File ~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:1865, in _LazyModule._get_module(self, module_name)
1864 except Exception as e:
-> 1865 raise RuntimeError(
1866 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1867 f" traceback):\n{e}"
1868 ) from e
RuntimeError: Failed to import transformers.modeling_tf_utils because of the following error (look up to see its traceback):
module 'tensorflow._api.v2.compat.v2.__internal__' has no attribute 'register_load_context_function'
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
File ~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:1863, in _LazyModule._get_module(self, module_name)
1862 try:
-> 1863 return importlib.import_module("." + module_name, self.__name__)
1864 except Exception as e:
File /opt/tljh/user/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:1006, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:688, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:883, in exec_module(self, module)
File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)
File ~/.local/lib/python3.10/site-packages/transformers/trainer_seq2seq.py:29
28 from .integrations.fsdp import is_fsdp_managed_module
---> 29 from .trainer import Trainer
30 from .utils import is_datasets_available, logging
File ~/.local/lib/python3.10/site-packages/transformers/trainer.py:42
40 # Integrations must be imported before ML frameworks:
41 # isort: off
---> 42 from .integrations import (
43 get_reporting_integration_callbacks,
44 )
46 # isort: on
File <frozen importlib._bootstrap>:1075, in _handle_fromlist(module, fromlist, import_, recursive)
File ~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:1851, in _LazyModule.__getattr__(self, name)
1850 elif name in self._class_to_module.keys():
-> 1851 module = self._get_module(self._class_to_module[name])
1852 value = getattr(module, name)
File ~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:1865, in _LazyModule._get_module(self, module_name)
1864 except Exception as e:
-> 1865 raise RuntimeError(
1866 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1867 f" traceback):\n{e}"
1868 ) from e
RuntimeError: Failed to import transformers.integrations.integration_utils because of the following error (look up to see its traceback):
Failed to import transformers.modeling_tf_utils because of the following error (look up to see its traceback):
module 'tensorflow._api.v2.compat.v2.__internal__' has no attribute 'register_load_context_function'
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
Cell In[2], line 2
1 import transformers
----> 2 from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer
4 training_args = Seq2SeqTrainingArguments()
File <frozen importlib._bootstrap>:1075, in _handle_fromlist(module, fromlist, import_, recursive)
File ~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:1851, in _LazyModule.__getattr__(self, name)
1849 value = Placeholder
1850 elif name in self._class_to_module.keys():
-> 1851 module = self._get_module(self._class_to_module[name])
1852 value = getattr(module, name)
1853 elif name in self._modules:
File ~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:1865, in _LazyModule._get_module(self, module_name)
1863 return importlib.import_module("." + module_name, self.__name__)
1864 except Exception as e:
-> 1865 raise RuntimeError(
1866 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1867 f" traceback):\n{e}"
1868 ) from e
RuntimeError: Failed to import transformers.trainer_seq2seq because of the following error (look up to see its traceback):
Failed to import transformers.integrations.integration_utils because of the following error (look up to see its traceback):
Failed to import transformers.modeling_tf_utils because of the following error (look up to see its traceback):
module 'tensorflow._api.v2.compat.v2.__internal__' has no attribute 'register_load_context_function'
```
Steps to reproduce:
1. Install `transformers 4.49.0` and other libraries listed above (with their dependencies installed); no conflict is reported.
2. Write the minimal example above in the Jupyter Notebook / a Python script (results are the same).
3. The above errors occurred.
### Expected behavior
In `transformers 4.28.1`, the `Seq2SeqTrainingArguments` won't cause errors. The translation example in this GitHub is using the same code as well. No error should be expected. | 1medium
|
Title: If my exact string syntax is known, is there a way to pass it as a parameter to improve accuracy?
Body: I need to extract a single numeric string per image.
The strings are formatted.
I know the exact syntax of the strings: `(-)###0.000_(-)###0.000_/_(-)##0.00_(-)##0.00_/_0.000`
Where:
1. `(-)` is absent when positive or a minus sign when negative.
2. `#` is absent unless it is a number `1 through 9`
3. `0` is any number `0 through 9` (`0` is never absent)
4. `_` is a space and `/` is a forward slash.
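In regular-expression terms, the rules above translate roughly to the following pattern, which could be used to validate or post-correct the OCR output (a rough sketch of the syntax on my part, not a feature of any OCR engine):
```python
import re

# Optional sign, 1-4 integer digits, fixed decimals; fields are space-separated
# with " / " between the groups, matching the syntax described above.
PATTERN = re.compile(
    r"^-?\d{1,4}\.\d{3} -?\d{1,4}\.\d{3} / -?\d{1,3}\.\d{2} -?\d{1,3}\.\d{2} / \d\.\d{3}$"
)

print(bool(PATTERN.match("-12.345 1180.000 / -3.14 90.00 / 0.125")))  # True
```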
The string is on a single line and the x/y coordinates of its bounding box are known and constant.
The character size and font are known and constant.
Is it possible to inform the model of this syntax?
If so, would there be any accuracy or performance advantage? | 1medium
|
Title: Uncovered files from dirs without __init__.py are not reported
Body: >This feature is built into coverage, but with a caveat: you have to use paths for the source.
>Eg: if you use `py.test --cov=somepath` then you'll get all files in `somepath` in the report.
_Originally posted by @ionelmc in https://github.com/pytest-dev/pytest-cov/issues/88#issuecomment-139555501_
If I have a file src/utils/foo.py and run
`pytest --cov=src`
I would expect that foo.py is reported as uncovered. However, it is not.
If I run
`pytest --cov=src/utils`
the file is reported as uncovered.
=> How can I tell pytest-cov to **look for uncovered files recursively**? Is there some flag I can set in pyproject.toml or do I need to use some regular expression for some file path property?
(Adding empty __init__.py files would also help to identify uncovered files. However, adding __init__.py files only for determining coverage seems weird, especially as the need for those files vanishes once the tests are written.)
I use pytest 7.2.0, pytest-cov 4.0.0 and python 3.10.5
My pyproject.toml file contains following settings:
```
[tool.pytest.ini_options]
# Also see
# https://docs.pytest.org/en/7.1.x/customize.html#pyproject-toml
# https://pytest-cov.readthedocs.io/en/latest/config.html
# If you want to see console output from tests, include -s flag
addopts = [
# !! do not include --cov option here because it would destroy PyCharm debug feature !!
# Also see https://stackoverflow.com/questions/40718760/unable-to-debug-in-pycharm-with-pytest
'--junitxml=report.xml',
'--import-mode=importlib',
'-s' # enables logging output in console while debugging tests
]
pythonpath = ['src', 'test']
testpaths = 'test'
[tool.coverage.run]
source = ['src']
[tool.coverage.report]
# Also see
# https://coverage.readthedocs.io/en/6.4.4/config.html#config
fail_under = 90
show_missing = true
exclude_lines = ['if __name__ == .__main__.:']
[tool.coverage.html]
directory='.coverage'
```
| 1medium
|
Title: Dependency Dashboard
Body: This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more.<br>[View this repository on the Mend.io Web Portal](https://developer.mend.io/github/RobertCraigie/prisma-client-py).
## Config Migration Needed
- [ ] <!-- create-config-migration-pr --> Select this checkbox to let Renovate create an automated Config Migration PR.
## Rate-Limited
These updates are currently rate-limited. Click on a checkbox below to force their creation now.
- [ ] <!-- unlimit-branch=renovate/pytest-subprocess-1.x -->chore(deps): update dependency pytest-subprocess to v1.5.3
- [ ] <!-- unlimit-branch=renovate/dirty-equals-0.x -->chore(deps): update dependency dirty-equals to v0.9.0
- [ ] <!-- unlimit-branch=renovate/inline-snapshot-0.x -->chore(deps): update dependency inline-snapshot to v0.20.9
- [ ] <!-- unlimit-branch=renovate/mock-5.x -->chore(deps): update dependency mock to v5.2.0
- [ ] <!-- unlimit-branch=renovate/nox-2024.x -->chore(deps): update dependency nox to v2024.10.9
- [ ] <!-- unlimit-branch=renovate/pydantic-2.x -->chore(deps): update dependency pydantic to v2.10.6
- [ ] <!-- unlimit-branch=renovate/actions -->chore(deps): update dependency python to 3.13
- [ ] <!-- unlimit-branch=renovate/ruff-0.x -->chore(deps): update dependency ruff to v0.11.2
- [ ] <!-- unlimit-branch=renovate/syrupy-4.x -->chore(deps): update dependency syrupy to v4.9.1
- [ ] <!-- unlimit-branch=renovate/typer-0.x -->chore(deps): update dependency typer to v0.15.2
- [ ] <!-- unlimit-branch=renovate/wheel-0.x -->chore(deps): update dependency wheel to v0.45.1
- [ ] <!-- unlimit-branch=renovate/python-3.x -->chore(deps): update python docker tag to v3.13
- [ ] <!-- unlimit-branch=renovate/nox-2025.x -->chore(deps): update dependency nox to v2025
- [ ] <!-- unlimit-branch=renovate/pre-commit-4.x -->chore(deps): update dependency pre-commit to v4
- [ ] <!-- unlimit-branch=renovate/twine-6.x -->chore(deps): update dependency twine to v6
- [ ] <!-- create-all-rate-limited-prs -->🔐 **Create all rate-limited PRs at once** 🔐
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/pyright-1.x -->[chore(deps): update dependency pyright to v1.1.397](../pull/1036)
- [ ] <!-- rebase-branch=renovate/slotscheck-0.x -->[chore(deps): update dependency slotscheck to v0.19.1](../pull/1048)
- [ ] <!-- rebase-branch=renovate/mkdocs-1.x -->[chore(deps): update dependency mkdocs to v1.6.1](../pull/954)
- [ ] <!-- rebase-branch=renovate/mkdocs-material-9.x -->[chore(deps): update dependency mkdocs-material to v9.6.9](../pull/949)
- [ ] <!-- rebase-branch=renovate/pytest-8.x -->[chore(deps): update dependency pytest to v8.3.5](../pull/955)
- [ ] <!-- rebase-branch=renovate/pytest-asyncio-0.x -->[chore(deps): update dependency pytest-asyncio to v0.25.3](../pull/897)
- [ ] <!-- rebase-branch=renovate/rtoml-0.x -->[chore(deps): update dependency rtoml to v0.12.0](../pull/899)
- [ ] <!-- rebase-branch=renovate/mypy -->[chore(deps): update mypy](../pull/1026) (`mypy`, `pytest-mypy-plugins`)
- [ ] <!-- rebase-branch=renovate/winamd64-python-3.x -->[chore(deps): update winamd64/python docker tag to v3.13](../pull/824)
- [ ] <!-- rebase-branch=renovate/major-actions -->[chore(deps): update actions (major)](../pull/856) (`actions/cache`, `actions/download-artifact`, `actions/upload-artifact`, `docker/build-push-action`, `geekyeggo/delete-artifact`, `ubuntu`)
- [ ] <!-- rebase-all-open-prs -->**Click on this checkbox to rebase all open PRs at once**
## Detected dependencies
<details><summary>dockerfile</summary>
<blockquote>
<details><summary>tests/Dockerfile</summary>
- `python 3.10-slim-bullseye`
</details>
<details><summary>tests/windows.Dockerfile</summary>
- `winamd64/python 3.11`
</details>
</blockquote>
</details>
<details><summary>github-actions</summary>
<blockquote>
<details><summary>.github/workflows/docs.yml</summary>
- `actions/checkout v4`
- `actions/setup-python v5`
</details>
<details><summary>.github/workflows/lint-pr.yml</summary>
- `amannn/action-semantic-pull-request v5.5.3`
- `ubuntu 22.04`
</details>
<details><summary>.github/workflows/publish.yml</summary>
- `actions/checkout v4`
- `actions/setup-python v5`
</details>
<details><summary>.github/workflows/test.yml</summary>
- `actions/checkout v4`
- `actions/setup-python v5`
- `actions/cache v3`
- `actions/cache v3`
- `actions/upload-artifact v3`
- `actions/checkout v4`
- `actions/setup-python v5`
- `actions/cache v3`
- `actions/checkout v4`
- `actions/setup-python v5`
- `actions/cache v3`
- `actions/checkout v4`
- `actions/setup-python v5`
- `actions/cache v3`
- `actions/checkout v4`
- `actions/setup-python v5`
- `actions/cache v3`
- `actions/cache v3`
- `actions/checkout v4`
- `actions/setup-python v5`
- `actions/cache v3`
- `actions/upload-artifact v3`
- `actions/checkout v4`
- `actions/setup-python v5`
- `actions/download-artifact v3`
- `actions/upload-artifact v3`
- `geekyeggo/delete-artifact v2`
- `actions/checkout v4`
- `docker/setup-qemu-action v3`
- `docker/setup-buildx-action v3`
- `docker/build-push-action v5`
- `actions/checkout v4`
- `python 3.11`
- `python 3.10`
</details>
</blockquote>
</details>
<details><summary>pip_requirements</summary>
<blockquote>
<details><summary>databases/requirements.txt</summary>
- `dirty-equals ==0.8.0`
</details>
<details><summary>pipelines/requirements/all.txt</summary>
</details>
<details><summary>pipelines/requirements/ci.txt</summary>
</details>
<details><summary>pipelines/requirements/coverage.txt</summary>
</details>
<details><summary>pipelines/requirements/deps/coverage-badge.txt</summary>
</details>
<details><summary>pipelines/requirements/deps/coverage.txt</summary>
</details>
<details><summary>pipelines/requirements/deps/lark.txt</summary>
</details>
<details><summary>pipelines/requirements/deps/mypy.txt</summary>
- `mypy ==1.11.1`
</details>
<details><summary>pipelines/requirements/deps/pydantic.txt</summary>
- `pydantic ==2.8.2`
</details>
<details><summary>pipelines/requirements/deps/pyright.txt</summary>
- `pyright ==1.1.377`
</details>
<details><summary>pipelines/requirements/deps/pytest-asyncio.txt</summary>
- `pytest-asyncio ==0.21.1`
</details>
<details><summary>pipelines/requirements/deps/pytest-mock.txt</summary>
</details>
<details><summary>pipelines/requirements/deps/pytest.txt</summary>
- `pytest ==8.1.2`
</details>
<details><summary>pipelines/requirements/deps/ruff.txt</summary>
- `ruff ==0.6.3`
</details>
<details><summary>pipelines/requirements/deps/syrupy.txt</summary>
- `syrupy ==4.8.0`
</details>
<details><summary>pipelines/requirements/dev.txt</summary>
- `nox ==2024.4.15`
- `wheel ==0.44.0`
- `pre-commit ==2.21.0`
- `twine ==5.1.1`
- `typer ==0.12.4`
- `rtoml ==0.9.0`
</details>
<details><summary>pipelines/requirements/docs.txt</summary>
- `mkdocs ==1.5.3`
- `mkdocs-material ==9.5.9`
</details>
<details><summary>pipelines/requirements/integration-prisma-schema-folder.txt</summary>
</details>
<details><summary>pipelines/requirements/lint.txt</summary>
- `slotscheck ==0.19.0`
</details>
<details><summary>pipelines/requirements/mypy.txt</summary>
</details>
<details><summary>pipelines/requirements/node.txt</summary>
- `nodejs-bin ==16.15.1a4`
</details>
<details><summary>pipelines/requirements/test.txt</summary>
- `mock ==5.1.0`
- `pytest-subprocess ==1.5.2`
- `inline-snapshot ==0.12.1`
</details>
<details><summary>pipelines/requirements/typesafety-mypy.txt</summary>
- `pytest-mypy-plugins ==3.1.2`
</details>
<details><summary>pipelines/requirements/typesafety-pyright.txt</summary>
- `pytest-pyright ==0.0.6`
</details>
<details><summary>requirements/base.txt</summary>
- `httpx >=0.19.0`
- `jinja2 >=2.11.2`
- `pydantic >=1.10.0, < 3`
- `click >=7.1.2`
- `python-dotenv >=0.12.0`
- `typing-extensions >=4.5.0`
</details>
<details><summary>requirements/node.txt</summary>
</details>
<details><summary>tests/integrations/custom-generator/requirements.txt</summary>
</details>
<details><summary>tests/integrations/recursive-types/requirements.txt</summary>
</details>
</blockquote>
</details>
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
| 3misc
|
Title: Fresh install, Compiling mycodo_wrapper upgrade_commands warnings
Body: ### Describe the problem/bug
During a fresh install I receive a couple warnings. Doesn't seem to affect anything, but new users may worry.
### Versions:
- Mycodo Version: 8.8.8
- Raspberry Pi Version: pi-zero
- Raspbian OS Version: Buster Lite
### Additional context
Log snippet
```
#### Compiling mycodo_wrapper
/home/pi/Mycodo/mycodo/scripts/mycodo_wrapper.c: In function ‘upgrade_commands’:
/home/pi/Mycodo/mycodo/scripts/mycodo_wrapper.c:17:5: warning: ‘strncat’ specified bound 255 equals destination size [-Wstringop-overflow=]
strncat(full_cmd, "/upgrade_commands.sh ", sizeof(full_cmd));
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/pi/Mycodo/mycodo/scripts/mycodo_wrapper.c: In function ‘main’:
/home/pi/Mycodo/mycodo/scripts/mycodo_wrapper.c:35:4: warning: ‘strncat’ specified bound 255 equals destination size [-Wstringop-overflow=]
strncat(restoreScript, "/upgrade_commands.sh backup-create", sizeof(restoreScript));
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
Full install.log https://termbin.com/8o5f6 | 0easy
|
Title: Use of custom error handler is obscure
Body: Not sure if this is intended or otherwise, but just posting my experience -
I am using flask_restplus with payload validation:
```
class resource(Resource):
@ns.expect(payload_model, validate=True)
```
However, I want to change the default validation JSON output. From the restplus source code, there seems to be no easy way to override the default validation message other than changing the source code directly in the `validate` method of the `BaseModel` class.
I found another method, which is to use the `errorhandler` provided by flask_restplus to catch the `BadRequest` raised from the `validate` function. However, it seems that the error handler always uses the `data` attribute from the raised error regardless of the result I return in the handler function. The only way I got it to work is to delete the `data` attribute of the raised error.
```
@ns.errorhandler(BadRequest)
def handle_generic_exception(error):
data = {'custom_data': ''}
delattr(error, 'data')
return data, 400
``` | 1medium
|
Title: panel.chat.langchain Import recursion error
Body: The new version, 1.5.0, introduces a recursive import caused by this change https://github.com/holoviz/panel/commit/d0744c50c66866396272e056282c3df38f33ecb9#diff-3e786c7bf61e242b5ec4094db9bb91fab89546aadfa12ed11b56ca0c2a142f71R56. `import_module` protected against this.
#### ALL software version info
(this library, plus any other relevant software, e.g. bokeh, python, notebook, OS, browser, etc should be added within the dropdown below.)
<details>
<summary>Software Version Info</summary>
```plaintext
1.5.0
```
</details>
#### Description of expected behavior and the observed behavior
#### Complete, minimal, self-contained example code that reproduces the issue
```python
import panel
panel.chat.langchain
RecursionError: maximum recursion depth exceeded
``` | 2hard
|
Title: Build Extension Failed with setuptools==77.0.3
Body: ### 🐛 Describe the bug
When building DeepSpeed wheels with setuptools==77.0.3, the CUDAExtension build throws the following error:
```
File "/opt/python/cp39-cp39/lib/python3.9/site-packages/setuptools/_distutils/command/sdist.py", line 245, in add_defaults
self._add_defaults_ext()
File "/opt/python/cp39-cp39/lib/python3.9/site-packages/setuptools/_distutils/command/sdist.py", line 329, in _add_defaults_ext
build_ext = self.get_finalized_command('build_ext')
File "/opt/python/cp39-cp39/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 333, in get_finalized_command
cmd_obj = self.distribution.get_command_obj(command, create)
File "/opt/python/cp39-cp39/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 885, in get_command_obj
cmd_obj = self.command_obj[command] = klass(self)
File "/opt/python/cp39-cp39/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 397, in __init__
super().__init__(*args, **kwargs)
File "/opt/python/cp39-cp39/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 402, in __init__
super(BuildExtension, self).__init__(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'use_ninja'
```
### How to Reproduce
the code is from [pytorch](https://github.com/pytorch/pytorch/blob/6bbe8dbd63f25e10ef75252a89ac277feff59ba1/torch/utils/cpp_extension.py#L540)
```python
from distutils.dist import Distribution
from setuptools.command.build_ext import build_ext
class BuildExtension(build_ext):
@classmethod
def with_options(cls, **options):
"""Return a subclass with alternative constructor that extends any original keyword arguments to the original constructor with the given options."""
class cls_with_options(cls): # type: ignore[misc, valid-type]
def __init__(self, *args, **kwargs):
kwargs.update(options)
super().__init__(*args, **kwargs)
return cls_with_options
def __init__(self, *args, **kwargs) -> None:
super().__init__(*args, **kwargs)
self.no_python_abi_suffix = kwargs.get("no_python_abi_suffix", False)
self.use_ninja = kwargs.get("use_ninja", True)
d = Distribution()
build_class = BuildExtension.with_options(use_ninja=True)
build_class(d)
```
### Versions
+ The successful GitHub Actions run (with "setuptools<=77.0.1") is [here](https://github.com/AlongWY/deepspeed_wheels/actions/runs/14004770472)
+ The failed GitHub Actions run is [here](https://github.com/AlongWY/deepspeed_wheels/actions/runs/13989692342)
### Related issue
https://github.com/pypa/setuptools/issues/4908#issue-2940177305
cc @malfet @zou3519 @xmfan | 1medium
|
Title: training method for semantic role labeling
Body: Hi, I'm a newbie in NLP. What methods and architectures are used for training semantic role labeling?
Are you using an LSTM or another architecture?
Thank you | 1medium
|
Title: AttributeError: module 'tiktoken' has no attribute 'encoding_for_model'
Body: Getting the following error when launching the app
`File "C:\Users\xxx\Anaconda3\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "C:\Users\xxx\ask-my-pdf\src\gui.py", line 20, in <module>
import model
File "C:\Users\xxx\ask-my-pdf\src\model.py", line 12, in <module>
import ai
File "C:\Users\xxx\ask-my-pdf\src\ai.py", line 39, in <module>
tokenizer_model = openai.model('text-davinci-003')
File "C:\Users\xxx\Anaconda3\lib\site-packages\ai_bricks\api\openai.py", line 30, in model
return _class(name, **kwargs)
File "C:\Users\xxx\Anaconda3\lib\site-packages\ai_bricks\api\openai.py", line 57, in __init__
self.encoder = tiktoken.encoding_for_model(name)` | 1medium
|
Title: S3Boto3Storage - "PermissionError: Access Denied" persisted when accessing a file before it is created.
Body: Hello,
First off, I just want to say thank you. This package has been extremely helpful for a project that I am working on.
I am using this package to store pandas dataframes to S3, and I have come across a case where a `PermissionError: Access Denied` is persisted if I:
1. attempt to access a file that does not exist `import pandas as pd; pd.read_csv("s3://media_root_dir/test.csv")`
2. later create the file (the process I am using to create a dataframe can be a large computation, so this can be delayed)
3. attempt to access the file again in the same session (step 1)
If I restart my session (Django shell or webserver), I am able to access the dataframe. The implication is that I am serving some dataframes via Django REST Framework, and if I attempt to access the dataframe before it is created, I cannot later access the dataframe after it has actually been created.
I have subclassed `S3Boto3Storage` as follows:
```
from django.conf import settings
from storages.backends.s3boto3 import S3Boto3Storage
class MediaStorage(S3Boto3Storage):
"""
Custom storage for collecting user-generated media files to dedicated
S3 bucket.
"""
bucket_name = settings.AWS_MEDIA_BUCKET_NAME
location = settings.MEDIA_ROOT_DIR
```
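For what it's worth, a rough sketch of a cache-related workaround I am wondering about (assuming pandas goes through fsspec/s3fs for `s3://` paths here, so the earlier 404 may be served from a cached filesystem instance; not verified):
```python
import fsspec

# fsspec caches S3FileSystem instances per session; dropping the cached
# listings before retrying the read may avoid the stale 404/"Access Denied".
fs = fsspec.filesystem("s3")
fs.invalidate_cache()  # optionally scope it: fs.invalidate_cache("media_root_dir")
```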
Thank you for any assistance!
Sean | 1medium
|
Title: How to make auto-reply and a scheduled service coexist
Body: As the title says, I want to implement both an auto-reply service and a service that sends news messages on a schedule.
1. Following the documentation, auto-reply works by registering the relevant message handlers and then calling itchat.run() to start listening.
2. For the scheduled news messages, I want to use a scheduling service (apscheduler).
The problem I'm running into now is that these two tasks block each other:
```python
# start auto-reply
itchat.run()
# start the scheduler --- because run() above keeps running, the next statement is never reached
sched.start()
```
How should this be solved? From searching online, multithreading seems to be an option.
I tried running sched in a separate thread, but then I couldn't send messages through itchat. | 1medium
|
Title: Unknown bug I can't figure out, no detailed info available; when I try to migrate, this error always pops up
Body:
```
Traceback (most recent call last):
File "/usr/local/bin/aerich", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.9/site-packages/aerich/cli.py", line 258, in main
cli()
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.9/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/click/decorators.py", line 21, in new_func
return f(get_current_context(), *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/aerich/cli.py", line 33, in wrapper
loop.run_until_complete(f(*args, **kwargs))
File "/usr/local/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
return future.result()
File "/usr/local/lib/python3.9/site-packages/aerich/cli.py", line 91, in migrate
ret = await command.migrate(name)
File "/usr/local/lib/python3.9/site-packages/aerich/__init__.py", line 115, in migrate
return await Migrate.migrate(name)
File "/usr/local/lib/python3.9/site-packages/aerich/migrate.py", line 130, in migrate
cls.diff_models(cls._last_version_content, new_version_content)
File "/usr/local/lib/python3.9/site-packages/aerich/migrate.py", line 211, in diff_models
table = change[0][1].get("through")
AttributeError: 'str' object has no attribute 'get'
```
| 2hard
|
Title: Javascript TypeError with renderedMarkdown in streamlit v1.41.1: nt.replaceAll is not a function
Body: ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
On occasion we get a JavaScript error which hints at a rendering issue with Markdown. The error cannot be reproduced reliably.
Even a simple refresh can fix the error until a later time of the day. Therefore it is hard to reproduce.
The error messages started to occur after an update of the `streamlit` version from `1.36` to `1.41.1`. Other versions in between might be affected.
### Reproducible Code Example
```Python
```
### Steps To Reproduce
_No response_
### Expected Behavior
_No response_
### Current Behavior
Error message:
```
TypeError: nt.replaceAll is not a function
at RenderedMarkdown (https://<snip>/static/js/index.Phesr84n.js:442:1744)
at Nh (https:<snip>/static/js/index.Phesr84n.js:46:18935)
at Vk (https:<snip>/static/js/index.Phesr84n.js:48:48583)
at Uk (https:<snip>/static/js/index.Phesr84n.js:48:43849)
at Tk (https:<snip>/static/js/index.Phesr84n.js:48:43778)
at Ik (https:<snip>/static/js/index.Phesr84n.js:48:43622)
at Ek (https:<snip>/static/js/index.Phesr84n.js:48:40437)
at jg (https:<snip>/static/js/index.Phesr84n.js:46:3536)
at gi (https: <snip>/static/js/index.Phesr84n.js:48:37425)
at ii (https:<snip>/static/js/index.Phesr84n.js:46:24857)
```
### Is this a regression?
- [x] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.41.1
- Python version: 3.10
- Operating System: macOS 14.7.2
- Browser: Firefox
### Additional Information
_No response_ | 1medium
|
Title: [BUG] No such file or directory
Body: ### Description
<!--- Describe your bug in detail -->
tc_bert_azureml.ipynb
Error message:
wc: /mnt/batch/tasks/shared/LS_root/jobs/gaia-ml-wks/azureml/3576038c-f399-4522-8674-383ad7cd316b/mounts/workspaceblobstore/azureml/88f85fcf-eaf4-4d58-bff2-712b35cb50aa/train0,: No such file or directory
### How do we replicate the bug?
<!--- Please be specific as possible (use a list if needed). -->
Just run the notebook tc_bert_azureml
<!--- For example: -->
<!--- * Create a conda environment for gpu -->
<!--- * Run unit test `test_timer.py` -->
<!--- * ... -->
### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for the timer should pass successfully. -->
### Other Comments
| 1medium
|
Title: DateTime columns can not be used
Body: ### Describe the bug
I have a simple table with one column of type DateTime; every time I query a record that has the date set, it fails with an exception:
```
File "lib\sqlalchemy\cyextension\resultproxy.pyx", line 16, in sqlalchemy.cyextension.resultproxy.BaseRow.__init__
File "lib\sqlalchemy\cyextension\resultproxy.pyx", line 73, in sqlalchemy.cyextension.resultproxy._apply_processors
File "lib\sqlalchemy\cyextension\processors.pyx", line 34, in sqlalchemy.cyextension.processors.str_to_datetime
TypeError: fromisoformat: argument must be str
```
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
_No response_
### SQLAlchemy Version in Use
2.0.2,2.0.16
### DBAPI (i.e. the database driver)
pysqlite
### Database Vendor and Major Version
SQLite
### Python Version
3.11
### Operating system
Windows
### To Reproduce
```python
Create table with datetime column, insert record with datetime, query with .all()
```
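For reference, a minimal self-contained version of these steps (a hedged sketch; note that the `fromisoformat: argument must be str` error usually points at a non-string value having ended up stored in the DateTime column):
```python
import datetime

from sqlalchemy import DateTime, create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase):
    pass

class Event(Base):
    __tablename__ = "event"
    id: Mapped[int] = mapped_column(primary_key=True)
    created_at: Mapped[datetime.datetime] = mapped_column(DateTime)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    # Passing a real datetime object here works; passing e.g. a float
    # timestamp or bytes is what typically triggers the str_to_datetime
    # processor error on read-back.
    session.add(Event(created_at=datetime.datetime.now()))
    session.commit()
    print(session.scalars(select(Event)).all())
```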
### Error
```
File "lib\sqlalchemy\cyextension\resultproxy.pyx", line 16, in sqlalchemy.cyextension.resultproxy.BaseRow.__init__
File "lib\sqlalchemy\cyextension\resultproxy.pyx", line 73, in sqlalchemy.cyextension.resultproxy._apply_processors
File "lib\sqlalchemy\cyextension\processors.pyx", line 34, in sqlalchemy.cyextension.processors.str_to_datetime
TypeError: fromisoformat: argument must be str
```
### Additional context
_No response_ | 1medium
|
Title: Add system prompts independently of history of messages
Body: Hello! I wanted to know how to make the @agent.system_prompt decorator be called on every run when a message history is provided. I'm basically trying to re-evaluate the dependencies on each run (the dependencies change independently of the run, and I don't want to use a tool for this because it is not needed).
Here is how to reproduce:
```
# where I declare the agent
chat_agent = Agent(
'openai:gpt-4o',
system_prompt="Some prompt",
deps_type=ChatAgentDeps,
)
...
# where I run the agent on each message from the user received
@chat_agent.system_prompt
def get_current_status_of_tasks(ctx: RunContext[ChatAgentDeps]):
print("System Prompt:", ctx.deps.aa_agent.pending_tasks)
print("System Prompt:", ctx.deps.aa_agent.done_tasks)
return f"""Pending Action Items: {ctx.deps.aa_agent.get_pending_tasks()}
Done Action Items: {ctx.deps.aa_agent.get_done_tasks()}"""
result = await chat_agent.run(message, deps=ChatAgentDeps(
aa_agent=session["agent"]
), message_history=session.get("history", []))
session['history'] = result.all_messages()
```
The function get_current_status_of_tasks only gets called once.
Am I doing something wrong here? Is this expected? Thanks in advance! | 1medium
|
Title: loss_weights depending on epoch number
Body: Hi,
I'm trying to train a multi output nn and I need to change the weight of each loss component depending on the epoch number. In some previous versions of keras I implemented this mechanism by defining each weight of the loss_weights parameter as a K.variable() type, and changing the value with K.set_value() in an on_epoch_begin() method of a custom callback.
Now, in keras 3.4 this is not allowed, as loss_weights must be a list/dict of float, so within the callback I can't change the value in place with K.set_value(). Is there something I can do to overcome this issue?
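For reference, this is the kind of workaround I have been sketching (a rough sketch, not verified against Keras 3.4): wrap each loss so it reads a non-trainable `keras.Variable`, and update those variables from a callback. The model, data, and output names below are just placeholders.
```python
import numpy as np
import keras

# Non-trainable scalars that the callback updates between epochs.
w_a = keras.Variable(1.0, trainable=False, name="loss_weight_a")
w_b = keras.Variable(0.0, trainable=False, name="loss_weight_b")

def weighted_mse(weight):
    """Wrap MSE so the (variable) weight is read at every train step."""
    mse = keras.losses.MeanSquaredError()
    def loss(y_true, y_pred):
        return keras.ops.multiply(weight, mse(y_true, y_pred))
    return loss

class LossWeightSchedule(keras.callbacks.Callback):
    def on_epoch_begin(self, epoch, logs=None):
        # Any epoch-dependent schedule goes here.
        w_a.assign(1.0 / (epoch + 1))
        w_b.assign(1.0 - 1.0 / (epoch + 1))

inputs = keras.Input(shape=(4,))
out_a = keras.layers.Dense(1, name="out_a")(inputs)
out_b = keras.layers.Dense(1, name="out_b")(inputs)
model = keras.Model(inputs, {"out_a": out_a, "out_b": out_b})
model.compile(optimizer="adam",
              loss={"out_a": weighted_mse(w_a), "out_b": weighted_mse(w_b)})

x = np.random.rand(32, 4).astype("float32")
y = {"out_a": np.random.rand(32, 1), "out_b": np.random.rand(32, 1)}
model.fit(x, y, epochs=3, callbacks=[LossWeightSchedule()], verbose=0)
```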
Thanks | 1medium
|
Title: warning bug in Qwen2DecoderLayer in transformers ==4.49
Body: ### System Info
transformers ==4.49
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
class Qwen2DecoderLayer(nn.Module):
def __init__(self, config: Qwen2Config, layer_idx: int):
super().__init__()
self.hidden_size = config.hidden_size
self.self_attn = Qwen2Attention(config=config, layer_idx=layer_idx)
self.mlp = Qwen2MLP(config)
self.input_layernorm = Qwen2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
self.post_attention_layernorm = Qwen2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
if config.sliding_window and config._attn_implementation != "flash_attention_2":
logger.warning_once(
f"Sliding Window Attention is enabled but not implemented for `{config._attn_implementation}`; "
"unexpected results may be encountered."
)
```
`config.sliding_window` is a number, so the warning fires 100% of the time.
Should the code check `config.use_sliding_window` instead?
### Expected behavior
The warning should only be emitted when sliding-window attention is actually enabled, e.g. by checking `config.use_sliding_window` instead of `config.sliding_window`, which is an integer and therefore always truthy.
| 1medium
|
Title: Label mismatch in horizontal contributions plot
Body: In the horizontally oriented contribution plot, the hover-over labels are reversed. This is caused by the following lines in explainer_plots.py:
```
if orientation == 'horizontal':
cols = cols[::-1]
values = values[::-1]
contribs = contribs[::-1]
bases = bases[::-1]
fill_colors = fill_colors[::-1]
line_colors = line_colors[::-1]
```
which is missing:
`hover_text = hover_text[::-1]`
I'll open a pull request for the bug fix. | 0easy
|
Title: [interface] magnifier: multiline row at cursor only
Body: An idea to only show the current row as multiline, and the rest of the rows as single-line. Not sure if the row display should be stable (such that the rows near the cursor row are obscured) or if they all should scroll.
| 1medium
|
Title: Run Black on master or change flake8 line length
Body: Working on a fix for #257 as my first contribution to tikzplotlib :)
As flake8 is set to `max-line-length = 80` for the project, I set `line-length = 80` in Black locally. I understand from one of the commits on #348 that some code on the project is blackified. However, several files in `/tikzplotlib` are not blackified(?), which makes for a lot of "changed lines" that are only reformatted (on save). I suggest Black be run on all the files to make it easier to contribute.
It might also be the case that the code is indeed blackified at line length 88, in which case the flake8 line length should be changed. | 0easy
|
Title: Change Serial Number Length (more than 50chars)
Body: I have a problem with serial numbers in Tactical RMM.
I have some ESXi virtual machines whose serial numbers are 54 characters long.
But Tactical only allows 50 characters.
Is it possible to change this?
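For illustration, a rough sketch of the kind of change I mean on the Django side (the app, model, and field names here are hypothetical; I don't know the actual Tactical RMM schema):
```python
from django.db import migrations, models

class Migration(migrations.Migration):
    # hypothetical app label and parent migration
    dependencies = [("agents", "0001_initial")]

    operations = [
        migrations.AlterField(
            model_name="agent",        # hypothetical model name
            name="serial_number",      # hypothetical field name
            field=models.CharField(max_length=100, blank=True, null=True),
        ),
    ]
```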
Thanks. | 1medium
|
Title: Completion in indented lines incorrectly strips the prefix when jedi is disabled
Body: Completion in indented lines incorrectly strips the prefix when jedi is disabled.
Originally reported in Notebook (https://github.com/jupyter/notebook/issues/7397) then JupyterLab (https://github.com/jupyterlab/jupyterlab/issues/16522), but narrowed down to IPython 8.8 / 8.11; https://github.com/ipython/ipython/pull/13943 fixed a related issue but did not cover this one.
| 1medium
|
Title: dependency on py
Body: tox has a direct dependency on py which is somewhat unmaintained and has had a dodgy CVE filed against it.
output of `pipenv graph` (but really, the py dep is visible in setup.cfg, too.)
```
tox==3.27.0
- colorama [required: >=0.4.1, installed: 0.4.6]
...
- py [required: >=1.4.17, installed: 1.11.0]
- six [required: >=1.14.0, installed: 1.16.0]
```
Pytest also had a dependency on `py` and decided to vendorize the parts they needed. There is a whole video about the issue. https://www.youtube.com/watch?v=aZS3_-y6vsg
The workaround is to tell everyone to convince the corporate security team to ignore this CVE, which I guess works but scales poorly. | 1medium
|
Title: Allow "None" style table format
Body: Add an option to allow the user to set a "None" table style for tables.
See the following question on StackOverflow: https://stackoverflow.com/questions/58802092/how-to-create-xlsx-table-without-any-style
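For illustration, the proposed usage might look something like this (a sketch of the requested behavior; `'style': None` is what is being asked for, not necessarily something the current release accepts):
```python
import xlsxwriter

workbook = xlsxwriter.Workbook("no_style_table.xlsx")
worksheet = workbook.add_worksheet()

# Requested behavior: render the table with no built-in table style at all.
worksheet.add_table("B3:F7", {"style": None})

workbook.close()
```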
| 1medium
|
Title: Is it possible to train a Transformer with Relative Position Self-Attention on TPU?
Body: ### Description
Hi there,
Is it possible to train a Transformer with Relative Position Self-Attention on TPU? If so, what would be the recommended TPU hyper-parameters for it on WMT14 (en-de) dataset? How long would the training take?
### Environment information
Google Cloud TPU
| 1medium
|
Title: Can't Create Daily Point
Body: ## Mycodo Issue Report:
- Specific Mycodo Version: 6.1.0
#### Problem Description
Please list:
- what were you trying to do:
Trying to create a new Conditional: Timer (Daily Point). When you set the time (by using the clock that pops up, or by manually typing) an error is given when you try to save.
- specific setup details that are involved
Hardwired Outputs
### Errors
- List any errors you encountered.
Error: Mod Conditional: Start Time (HH:MM) must be a valid HH:MM time format
- Copy and pasting crash logs, or link to any specific
code lines you've isolated (please use GitHub permalinks for this)
No crash logs.
### Steps to Reproduce the issue:
How can this issue be reproduced?
1. Create a new function: Conditional: Timer (Daily Point)
2. Specify the time for the daily point
3. Click Save
### Additional Notes
Is there anything that should be added to make it easier
to address this issue? | 1medium
|
Title: Using VPC endpoint and PandasCursor together
Body: I am accessing Athena from a closed network via a VPC endpoint.
Specifying the URL of the VPC endpoint to `endpoint_url=` works as expected, but it did not work well when used with `PandasCursor`.
I checked the code and found that when creating the boto3 client for S3, `endpoint_url=` is also applied, and I suspect that is the cause of the error.
If possible, I would appreciate it if `endpoint_url=` and `PandasCursor` could be used together.
- Python: 3.12.1
- PyAthena: 3.12.2
```
from pyathena import connect
from pyathena.pandas.cursor import PandasCursor
cursor = connect(
work_group='XXXXXXX',
endpoint_url='https://vpce-XXXXXXX.athena.XXXXXXX.vpce.amazonaws.com',
region_name='XXXXXXX').cursor(PandasCursor)
df = cursor.execute('''
SELECT * FROM XXXXXXX.XXXXXXX LIMIT 10
''').as_pandas()
print(df)
```
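For context, a rough boto3-only sketch of what I mean (placeholders as above; this is not how PyAthena wires its clients internally, just the intent that the Athena API and S3 need different endpoints):
```python
import boto3

# Athena API calls go through the Athena interface VPC endpoint...
athena = boto3.client(
    "athena",
    region_name="XXXXXXX",
    endpoint_url="https://vpce-XXXXXXX.athena.XXXXXXX.vpce.amazonaws.com",
)

# ...while the query results live in S3, so the S3 client must use an S3
# endpoint (an S3 gateway/interface endpoint or the default endpoint),
# not the Athena one.
s3 = boto3.client("s3", region_name="XXXXXXX")
```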
```
Failed to get content length.
Traceback (most recent call last):
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\pyathena\result_set.py", line 434, in _get_content_length
response = retry_api_call(
^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\pyathena\util.py", line 84, in retry_api_call
return retry(func, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\tenacity\__init__.py", line 475, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\tenacity\__init__.py", line 376, in iter
result = action(retry_state)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\tenacity\__init__.py", line 398, in <lambda>
self._add_action_func(lambda rs: rs.outcome.result())
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\tenacity\__init__.py", line 478, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\botocore\client.py", line 569, in _api_call
return self._make_api_call(operation_name, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\botocore\client.py", line 1023, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (404) when calling the HeadObject operation: Not Found
Traceback (most recent call last):
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\pyathena\result_set.py", line 434, in _get_content_length
response = retry_api_call(
^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\pyathena\util.py", line 84, in retry_api_call
return retry(func, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\tenacity\__init__.py", line 475, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\tenacity\__init__.py", line 376, in iter
result = action(retry_state)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\tenacity\__init__.py", line 398, in <lambda>
self._add_action_func(lambda rs: rs.outcome.result())
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\tenacity\__init__.py", line 478, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\botocore\client.py", line 569, in _api_call
return self._make_api_call(operation_name, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\botocore\client.py", line 1023, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (404) when calling the HeadObject operation: Not Found
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\tmp\sample3.py", line 10, in <module>
df = cursor.execute('''
^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\pyathena\pandas\cursor.py", line 162, in execute
self.result_set = AthenaPandasResultSet(
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\pyathena\pandas\result_set.py", line 143, in __init__
df = self._as_pandas()
^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\pyathena\pandas\result_set.py", line 386, in _as_pandas
df = self._read_csv()
^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\pyathena\pandas\result_set.py", line 269, in _read_csv
length = self._get_content_length()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\pyathena\result_set.py", line 443, in _get_content_length
raise OperationalError(*e.args) from e
pyathena.error.OperationalError: An error occurred (404) when calling the HeadObject operation: Not Found
``` | 1medium
|
Title: `sb.wait_for_text_not_visible()` wasn't mapping to the correct CDP Mode method
Body: ### `sb.wait_for_text_not_visible()` wasn't mapping to the correct CDP Mode method
----
This would cause failures in CDP Mode when calling regular SB methods directly. | 1medium
|
Title: 🤔 Issue template ideas
Body: We need an issue template to help with some expectations. Here is my quick riff on what we might want:
----
Our goal with Awesome Django is to highlight packages that we think are awesome and stand out above the rest.
Our goal isn't to be a comprehensive directory of 1000+ projects like [Django Packages](https://djangopackages.org).
We are looking for projects that are:
- relevant to Django
- maintained
- release and support history
- stand out because they are useful and solve a unique problem
- we can't ignore your star count, but we don't have a high number in mind.
What we are NOT looking out for:
- unmaintained projects
- promote your project, service, or employer
----
- [ ] What makes this product awesome?
- [ ] Are you the author or a maintainer? (no points off for self-promotion)
- [ ] If your project is brand new, we don't have a minimum number of GH stars, but your project needs "enough" stars.
- [ ] Is this project maintained?
- [ ] If your project is published on PyPI, is there a history/pattern of keeping it updated?
- [ ] If your project/service is a paid product, do you work for the same company? (emphasis on disclosure vs. promoting your product/service/company)
- [ ] Django and Python are trademarked by their respective foundations, if your product, paid service, and/or domain name use these trademarks, do you have their permission to do so?
| 3misc
|
Title: [BUG] dcc.Dropdown width rendering incorrect with Dash 3 rc4
Body: **Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 3.0.0rc4
dash-core-components 2.0.0
dash_design_kit 1.14.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: [e.g. iOS] MacOS
- Browser [e.g. chrome, safari] Chrome & Safari
- Version [e.g. 22] `Version 134.0.6998.88` Arm64
**Describe the bug**
`dcc.Dropdown` renders squashed with Dash 3, whereas it renders at full-width with Dash 2.x
**Expected behavior**
The `dcc.Dropdown` should render the same way between Dash 2.x and Dash 3.x, if there have been no code changes in the app.
**Screenshots**
Dash 3.0

Dash 2.0

| 1medium
|
Title: fix `IsNow`
Body: The following should pass
```py
assert '2022-07-15T10:56:38.311Z' == IsNow(delta=10, tz='utc', format_string='%Y-%m-%dT%H:%M:%S.%fZ', enforce_tz=False)
```
(ignoring that that's not "now" any longer, obviously) | 0easy
|
Title: CoarseDropout does not work with relative hole_height_range and hole_width_range
Body: ## Describe the bug
In contrast to the docstring of CoarseDropout, the two parameters `hole_height_range` and `hole_width_range` cannot be given as relative values. This is most probably due to a Validator which was introduced recently (1.4.23) for these two parameters: `AfterValidator(check_range_bounds(1, None))` in `class InitSchema(BaseDropout.InitSchema)`
### To Reproduce
Albumentations >= 1.4.23
```
augmentation = CoarseDropout(
hole_width_range=(0.25, 0.5),
hole_height_range=(0.5, 0.75),
num_holes_range=(1, 100),
p=1.0,
)
```
### Expected behavior
According to the docstring this should work.
### Actual behavior
Error:
```
pydantic_core._pydantic_core.ValidationError: 2 validation errors for InitSchema
hole_height_range
Value error, All values in (0.5, 0.75) must be >= 1 [type=value_error, input_value=(0.5, 0.75), input_type=tuple]
For further information visit https://errors.pydantic.dev/2.10/v/value_error
hole_width_range
Value error, All values in (0.25, 0.5) must be >= 1 [type=value_error, input_value=(0.25, 0.5), input_type=tuple]
For further information visit https://errors.pydantic.dev/2.10/v/value_error
```
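As a possible interim workaround (a sketch assuming the input size is known ahead of time, here 512x512), the intended fractions can be converted to absolute pixel ranges, which the validator does accept:
```python
from albumentations import CoarseDropout

H, W = 512, 512  # assumed known input size for this pipeline

augmentation = CoarseDropout(
    hole_height_range=(int(0.5 * H), int(0.75 * H)),
    hole_width_range=(int(0.25 * W), int(0.5 * W)),
    num_holes_range=(1, 100),
    p=1.0,
)
```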
| 1medium
|
Title: QST:Using geopandas in flask, the program ends directly
Body: - [x] I have searched the [geopandas] tag on [StackOverflow](https://stackoverflow.com/questions/tagged/geopandas) and [GIS StackExchange](https://gis.stackexchange.com/questions/tagged/geopandas) for similar questions.
- [ ] I have asked my usage related question on [StackOverflow](https://stackoverflow.com) or [GIS StackExhange](https://gis.stackexchange.com).
--- my question
A simple piece of code reads a shapefile; if I run it in a plain Python environment, **it shows the result**,
like this:

**but** when I put it into a **Flask** program and call it through a Flask route, the program ends directly.
Here is my Flask code:
```python
@DataTransformation.route('/TransShape2Jsonfile', methods=['GET', 'POST'])
def TransShape2Jsonfile():
    s2g.TransShapefile2JsonFile()
    return 'Data Trans Over'
```

`gpd.read_file(FilePath, encoding='gbk')`
After testing, I found that it is the **read_file()** call that fails: when the program reaches this line, it terminates immediately.
| 1medium
|
Title: [BUG] Cannot use self-signed url in OpenAI Model
Body: ### Issues Policy acknowledgement
- [X] I have read and agree to submit bug reports in accordance with the [issues policy](https://www.github.com/mlflow/mlflow/blob/master/ISSUE_POLICY.md)
### Where did you encounter this bug?
Databricks
### MLflow version
- Client: 2.19.0
- Openai 1.40.2
### System information
- Databrics Runtime 16.0 ML (includes Apache Spark 3.5.0, Scala 2.12)
### Describe the problem
The OpenAI deployment cannot access URLs signed by a private CA.
The instance of `AzureOpenAI`, as seen bellow, do not allow configuration of custom certificates for http client:
https://github.com/mlflow/mlflow/blob/3c86c188dbd76c614373f96cb3c03871458aba9c/mlflow/openai/__init__.py#L674-L683
The suggested method, [provided by OpenAI](https://github.com/openai/openai-python?tab=readme-ov-file#configuring-the-http-client), is this configuration:
```python
import httpx
from openai import OpenAI, DefaultHttpxClient
client = OpenAI(
# Or use the `OPENAI_BASE_URL` env var
base_url="http://my.test.server.example.com:8083/v1",
http_client=DefaultHttpxClient(
proxy="http://my.test.proxy.example.com",
transport=httpx.HTTPTransport(local_address="0.0.0.0"),
),
)
```
My suggestion is to add a new configuration, `OPENAI_SSL_VERIFY`. The implementation should be based on this:
```python
from openai import AzureOpenAI, DefaultHttpxClient
# Note DefaultHttpxClient is a just a proxy to httpx.Client
http_client = DefaultHttpxClient(verify=self.api_config.ssl_verify)
return AzureOpenAI(
api_key=self.api_token.token,
azure_endpoint=self.api_config.api_base,
api_version=self.api_config.api_version,
azure_deployment=self.api_config.deployment_id,
max_retries=max_retries,
timeout=timeout,
http_client=http_client
)
```
### Tracking information
```
System information: Linux #84~20.04.1-Ubuntu SMP Mon Nov 4 18:58:41 UTC 2024
Python version: 3.12.3
MLflow version: 2.19.0
MLflow module location: /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.12/site-packages/mlflow/__init__.py
Tracking URI: databricks
Registry URI: databricks
Databricks runtime version: 16.0
MLflow environment variables:
MLFLOW_CONDA_HOME: /databricks/conda
MLFLOW_DEPLOYMENTS_TARGET: databricks
MLFLOW_GATEWAY_URI: databricks
MLFLOW_PYTHON_EXECUTABLE: /databricks/spark/scripts/mlflow_python.sh
MLFLOW_TRACKING_URI: databricks
MLflow dependencies:
Flask: 2.2.5
Jinja2: 3.1.4
aiohttp: 3.9.5
alembic: 1.14.0
azure-storage-file-datalake: 12.17.0
boto3: 1.34.69
botocore: 1.34.69
docker: 7.1.0
google-cloud-storage: 2.10.0
graphene: 3.4.3
gunicorn: 20.1.0
langchain: 0.2.12
markdown: 3.4.1
matplotlib: 3.8.4
mlflow-skinny: 2.19.0
numpy: 1.26.4
pandas: 1.5.3
pyarrow: 15.0.2
pydantic: 2.8.2
scikit-learn: 1.4.2
scipy: 1.13.1
sqlalchemy: 2.0.30
tiktoken: 0.7.0
virtualenv: 20.26.2
```
### Code to reproduce issue
<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->
```python
import os
import openai
import mlflow
os.environ["AZURE_OPENAI_API_KEY"] = "xxxxxxx" # Any value
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://self-signed.badssl.com/" # Any self-signed url, like this one
os.environ["OPENAI_API_VERSION"] = "2024-06-01"
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_DEPLOYMENT_NAME"] = "text-embedding-3-small"
print(mlflow.__version__, openai.__version__) # Yields ('2.19.0', '1.40.2')
with mlflow.start_run():
model_info = mlflow.openai.log_model(
model="text-embedding-3-small",
task=openai.embeddings,
artifact_path="model"
)
# Fix error when model has no openai key... but this is another bug
os.environ["OPENAI_API_KEY"] = os.environ['AZURE_OPENAI_API_KEY']
# Load the model in pyfunc format
model = mlflow.pyfunc.load_model(model_info.model_uri)
# This will rise an Request #0 failed with: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate (_ssl.c:1000)
results = model.predict([
"This is a test"
])
# We can download self-signed certificate... but how to use it?
# echo "" | openssl s_client -connect self-signed.badssl.com:443 -prexit 2>/dev/null | sed -n -e '/BEGIN\ CERTIFICATE/,/END\ CERTIFICATE/ p'
```
### Stack trace
<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->
```
REPLACE_ME
```
### Other info / logs
<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->
```
APIConnectionError('Connection error.')Traceback (most recent call last):
File "/databricks/python/lib/python3.12/site-packages/httpx/_transports/default.py", line 72, in map_httpcore_exceptions
yield
File "/databricks/python/lib/python3.12/site-packages/httpx/_transports/default.py", line 236, in handle_request
resp = self._pool.handle_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
raise exc from None
File "/databricks/python/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
response = connection.handle_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 99, in handle_request
raise exc
File "/databricks/python/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 76, in handle_request
stream = self._connect(request)
^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 154, in _connect
stream = stream.start_tls(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/httpcore/_backends/sync.py", line 152, in start_tls
with map_exceptions(exc_map):
File "/usr/lib/python3.12/contextlib.py", line 158, in __exit__
self.gen.throw(value)
File "/databricks/python/lib/python3.12/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.ConnectError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate (_ssl.c:1000)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 972, in _request
response = self._client.send(
^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/httpx/_client.py", line 926, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/httpx/_client.py", line 954, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/httpx/_client.py", line 991, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/httpx/_client.py", line 1027, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/httpx/_transports/default.py", line 235, in handle_request
with map_httpcore_exceptions():
File "/usr/lib/python3.12/contextlib.py", line 158, in __exit__
self.gen.throw(value)
File "/databricks/python/lib/python3.12/site-packages/httpx/_transports/default.py", line 89, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ConnectError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate (_ssl.c:1000)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.12/site-packages/mlflow/openai/_openai_autolog.py", line 181, in patched_call
raw_result = original(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.12/site-packages/mlflow/utils/autologging_utils/safety.py", line 573, in call_original
return call_original_fn_with_event_logging(_original_fn, og_args, og_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.12/site-packages/mlflow/utils/autologging_utils/safety.py", line 508, in call_original_fn_with_event_logging
original_fn_result = original_fn(*og_args, **og_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.12/site-packages/mlflow/utils/autologging_utils/safety.py", line 570, in _original_fn
original_result = original(*_og_args, **_og_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/resources/embeddings.py", line 114, in create
return self._post(
^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 1259, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 936, in request
return self._request(
^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 996, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 1074, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 996, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 1074, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 996, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 1074, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 996, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 1074, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 996, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 1074, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 1006, in _request
raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.
```
### What component(s) does this bug affect?
- [ ] `area/artifacts`: Artifact stores and artifact logging
- [ ] `area/build`: Build and test infrastructure for MLflow
- [X] `area/deployments`: MLflow Deployments client APIs, server, and third-party Deployments integrations
- [ ] `area/docs`: MLflow documentation pages
- [ ] `area/examples`: Example code
- [X] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- [X] `area/models`: MLmodel format, model serialization/deserialization, flavors
- [ ] `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- [ ] `area/projects`: MLproject format, project running backends
- [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- [ ] `area/server-infra`: MLflow Tracking server backend
- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging
### What interface(s) does this bug affect?
- [ ] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- [ ] `area/windows`: Windows support
### What language(s) does this bug affect?
- [ ] `language/r`: R APIs and clients
- [ ] `language/java`: Java APIs and clients
- [ ] `language/new`: Proposals for new client languages
### What integration(s) does this bug affect?
- [X] `integrations/azure`: Azure and Azure ML integrations
- [ ] `integrations/sagemaker`: SageMaker integrations
- [ ] `integrations/databricks`: Databricks integrations | 1medium
|
Title: visualization of only 3 layers / example model_view_xlnet.ipynb
Body: I tried to load XLNet with only three layers (it works with the full XLNet), but with three layers the example model_view_xlnet.ipynb does not work:
```
config = XLNetConfig.from_pretrained('/transformers/')
config.n_layer = 3
config.num_labels = 3
model = XLNetModel.from_pretrained('/transformers/')
```
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-11-7c9c3356caa4> in <module>
17 input_id_list = input_ids[0].tolist() # Batch index 0
18 tokens = tokenizer.convert_ids_to_tokens(input_id_list)
---> 19 model_view(attention, tokens)
~/projects/bertviz/bertviz/model_view.py in model_view(attention, tokens, sentence_b_start, prettify_tokens)
78 attn_seq_len = len(attn_data['all']['attn'][0][0])
79 if attn_seq_len != len(tokens):
---> 80 raise ValueError(f"Attention has {attn_seq_len} positions, while number of tokens is {len(tokens)}")
81 display(Javascript('window.params = %s' % json.dumps(params)))
82 display(Javascript(vis_js))
ValueError: Attention has 768 positions, while number of tokens is 14
``` | 2hard
|
Title: Automatic translation add-on setting missing "Add as approved translation"
Body: ### Describe the issue
When configuring the automatic translation using the add-on, there is no option to choose "Add as approved translation" in the "Automatic translation mode" dropdown
<img width="874" alt="Image" src="https://github.com/user-attachments/assets/cf1369cd-4c9b-4179-ba43-8b770706325d" />
However, the same is available when automatic translation is selected from `Tools > Automatic Translation`
<img width="1168" alt="Image" src="https://github.com/user-attachments/assets/2c03afef-2e05-48ec-91e1-b337265f6a7c" />
### I already tried
- [x] I've read and searched [the documentation](https://docs.weblate.org/).
- [x] I've searched for similar filed issues in this repository.
### Steps to reproduce the behavior
1. Go to Add-ons
2. Install "Automatic Translation" add-on
3. Configure and open dropdown "Automatic translation mode"
### Expected behavior
To have option "Add as approved translation"
### Screenshots
_No response_
### Exception traceback
```pytb
```
### How do you run Weblate?
Docker container
### Weblate versions
5.9.2
### Weblate deploy checks
```shell
```
### Additional context
_No response_ | 1medium
|
Title: [ONNX] Improve onnx ops docs
Body: https://pytorch.org/docs/main/onnx_ops.html
Improve example to show the onnx op being used with torch ops. | 1medium
|
Title: `num_ops` argument of `RandAugment()` shouldn't accept negative values
Body: ### 🐛 Describe the bug
Setting negative values for the `num_ops` argument of [RandAugment()](https://pytorch.org/vision/master/generated/torchvision.transforms.v2.RandAugment.html) doesn't apply any augmentation transformations at all, as shown below:
```python
from torchvision.datasets import OxfordIIITPet
from torchvision.transforms.v2 import RandAugment
my_data = OxfordIIITPet(
root="data",
transform=RandAugment(num_ops=-1)
# transform=RandAugment(num_ops=-10)
# transform=RandAugment(num_ops=-100)
)
my_data[0][0]
```

And the `num_ops` argument is the number of augmentation transformations according to [the doc](https://pytorch.org/vision/master/generated/torchvision.transforms.v2.RandAugment.html), as shown below:
> Parameters:
> - num_ops ([int](https://docs.python.org/3/library/functions.html#int), optional) – Number of augmentation transformations to apply sequentially
So the `num_ops` argument of `RandAugment()` shouldn't accept negative values; a sketch of the kind of guard I mean is below.
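For illustration, this is the kind of guard I mean (a hedged sketch, not the actual torchvision implementation):
```python
from torchvision.transforms.v2 import RandAugment

class SafeRandAugment(RandAugment):
    """Thin wrapper that rejects non-positive num_ops instead of silently
    applying zero transformations."""

    def __init__(self, num_ops: int = 2, **kwargs):
        if num_ops < 1:
            raise ValueError(f"num_ops should be a positive integer, got {num_ops}")
        super().__init__(num_ops=num_ops, **kwargs)
```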
### Versions
```python
import torchvision
torchvision.__version__ # '0.20.1'
``` | 1medium
|
Title: Map Disappearing
Body: This is a really great plugin!
With that said, I'm encountering an issue where when I try to render a map, it disappears and reappears inconsistently. See video below.
https://www.loom.com/share/79e544d6eea74fd7a898691d217d12a0?sid=d25745ae-b4ea-47a3-8d89-80621fe7c652
For this example, here are the relevant code sections:
```python
def get_pricing():
"""
This function gets all the pricing information from Prycd and updates the pricing dataframe
:return:
"""
global prycd
apn = apn_textarea
# Check for empty fields
if apn is None or len(apn) == 0:
st.error("Please enter a valid APN.")
return
county = county_selectbox
if county is None or len(county) == 0:
st.error("Please select a valid county and state.")
return
fips = __get_fips_code(county)
md = f"""
## Pricing Data
The table below displays the pricing for a property located in {county} with assessor's property number {apn}.
"""
st.markdown(md)
pricing_results = __get_pricing_info(apn, fips)
st.dataframe(data=pricing_results,
use_container_width=True,
hide_index=True,
column_config={
"price": st.column_config.NumberColumn(label="Price", format="$%.2f"),
"price_per_acre": st.column_config.NumberColumn(label="Price Per Acre", format="$%.2f"),
"confidence": st.column_config.Column(label="Confidence"),
"meta.county": st.column_config.Column(label="County", help="The county where the property is located."),
"meta.confidence.Price Coefficient of Variation": st.column_config.NumberColumn(label="Coefficient of Variation"),
"meta.confidence.Acreage Coefficient of Variation": st.column_config.NumberColumn(
label="Acreage Coefficient of Variation"),
"meta.confidence.Number of Total Comps": st.column_config.NumberColumn(
label="Total Comps"),
"meta.confidence.Number of Sold Comps": st.column_config.NumberColumn(
label="Sold Comps")
})
m = folium.Map(location=[39.949610, -75.150282], zoom_start=16)
folium.Marker(
[39.949610, -75.150282], popup="Liberty Bell", tooltip="Liberty Bell"
).add_to(m)
# call to render Folium map in Streamlit
st_data = st_folium(m, width=725)
# Now populate the comps
min_acreage = comp_size_range[0]
max_acreage = comp_size_range[1]
comps_results = pd.DataFrame(__get_comps(county, state_selectbox, min_acreage, max_acreage))
# If there are no comps, display a message explaining that and halt.
if len(comps_results) == 0:
st.warning("No comps were found meeting this criteria.")
return
...
# This function is called by a button click in a sidebar here...
with st.sidebar:
....
submit_button = st.button("Submit", on_click=get_pricing)
```
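For reference, here is a minimal sketch of an alternative arrangement I am considering (rendering the map in the main script flow, gated on the button's return value instead of an `on_click` callback); the widget key and session-state flag are placeholders, and I don't know whether this is the intended pattern:
```python
import folium
import streamlit as st
from streamlit_folium import st_folium

with st.sidebar:
    submit = st.button("Submit")

if submit or st.session_state.get("show_map"):
    st.session_state["show_map"] = True  # keep the map across reruns
    m = folium.Map(location=[39.949610, -75.150282], zoom_start=16)
    folium.Marker([39.949610, -75.150282], tooltip="Liberty Bell").add_to(m)
    st_folium(m, width=725, key="pricing_map")
```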
Any help would be greatly appreciated. I've confirmed that this issue exists in both Firefox and Safari. When I don't include the folium map, the page loads as expected.
| 1medium
|
Title: Increase number of bibliographical entries in glossary
Body: I think it would be good if we included a few more citations in the glossary. In particular, it would be good to have a few references to trace the history of persistent homology. A few such are listed in the first page of https://www.maths.ed.ac.uk/~v1ranick/papers/edelhare.pdf, and an even earlier precursor is the 1994 paper by Barannikov [The Framed Morse complex and its invariants](https://hal.archives-ouvertes.fr/hal-01745109/document). | 1medium
|
Title: Please add a stream-based loading method to CRFSegmenter, so that bin files can be read directly from the jar package
Body: <!--
Please ask questions on the forum, do not post them here!
Please ask questions on the forum, do not post them here!
Please ask questions on the forum, do not post them here!
The fields below are required; otherwise the issue will be closed directly.
-->
**Describe the feature and the current behavior/state.**
**Will this change the current api? How?**
**Who will benefit with this feature?**
**Are you willing to contribute it (Yes/No):**
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Python version:
- HanLP version:
**Any other info**
* [x] I've carefully completed this form.
<!-- Search before posting; this box must be checked! -->
<!-- Search before posting; this box must be checked! -->
<!-- Search before posting; this box must be checked! --> | 1medium
|
Title: TiDE Model Stops At A Specific Epoch
Body: A more general question: I am trying to run a historical backtest using the TiDE model for my use case:
```
from darts.models import TiDEModel
tide_model = TiDEModel(
input_chunk_length=8,
output_chunk_length=3,
n_epochs=20
)
tide_model.fit(
    series=...,
    past_covariates=...,
    future_covariates=...,
)
tide_hf_results = tide_model.historical_forecasts(
...
)
```
For some reason, the model always stalls at a specific point (77% of epoch 5). I can see that the kernel is still running under the hood, but the progress bar no longer moves. I have tried increasing the memory and CPU by 3x, but the model still stalls at exactly the same point. I'm not sure if anyone has run into this issue before and has any suggested solutions.
No error messages are returned at all so I am not sure how to debug the issue.

| 1medium
|
Title: Support pandas 2
Body: **🚀 Feature Description**
Currently, the pandas requirement is constrained to >=1.4,<2.0:
https://github.com/coqui-ai/TTS/blob/11ec9f7471620ebaa57db7ff5705254829ffe516/requirements.txt#L26
[Pandas 2.0.0 was released in April](https://github.com/pandas-dev/pandas/releases/tag/v2.0.0), so this will start to result in dependency conflicts with other libraries that require pandas >= 2.0.
**Solution**
Loosen pandas requirement to support pandas 2
| 1medium
|
Title: Can your openmoe project train the Mixtral 8x7B model? [FEATURE]
Body: ### Describe the feature
I want to know if your openmoe project can train other MoE models like Mixtral. Also, in your openmoe.md, I cannot find this checkpoint in nvidia-apex: "git checkout 741bdf50825a97664db08574981962d66436d16a" | 2hard
|
Title: Rewrite Cypress test servers using gunicorn
Body: The table currently uses a set of independent Dash applications to run its end-to-end tests against.
https://github.com/plotly/dash-table/tree/master/tests/cypress/dash
https://github.com/plotly/dash-table/blob/master/package.json#L25
Use gunicorn to load all the test apps into a single app with routes instead. | 2hard
|
Title: A question about the scoring mechanism
Body: Take question 12 in the code test as an example:
https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/examples/CODE.md
Among the three answers to question 12, the 7B and 7B Plus models both score 7 points. Their answers are similar in length, but the amount of information is completely different: the latter mentions the GDB and valgrind tools, which are very helpful for tracking down memory problems, so it is clearly a better answer than those of 7B and 13B.
The problem here is that we cannot judge based on our own common sense, nor on text length or how well-organized the answer is. Building a scientific scoring system is important; otherwise we cannot tell whether the model has actually become better or worse. | 1medium
|
Title: New installation does not migrate existing users
Body: # Description
When installing into a project, the package functionality does not work with any of the users that already existed before the package was added. Presumably because the existing users aren't added into the `graphql_auth_userstatus` table.
Not sure if this is a bug or a feature request.
# Steps to Reproduce
If we need to reproduce and you don't provide steps for it, it will be closed. Alternatively, you can link a repo with the code to run your issue.
1. Have existing users registered without `django-graphql-auth`
2. Install `django-graphql-auth` and run migration
3. Attempt to use any of the functions (i.e., verifyToken)
## Expected behavior
Attempting to use any of the functions with an existing user should work without any extra steps (or at least minimal extra steps via some sort of management command). Works fine with users created after the fact.
## Actual behavior
Nothing happens when attempting to use any of the package features for existing users. (i.e., `verifyToken` returns null for token, `sendPasswordResetEmail` does nothing).
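For reference, a rough sketch of the kind of backfill I was expecting to need (assuming `graphql_auth.models.UserStatus` with a `status` related name and `verified`/`archived` flags; field and related names unverified):
```python
from django.contrib.auth import get_user_model
from graphql_auth.models import UserStatus

User = get_user_model()

# Create the missing UserStatus rows for accounts that predate the package,
# treating pre-existing users as already verified.
for user in User.objects.filter(status__isnull=True):
    UserStatus.objects.create(user=user, verified=True, archived=False)
```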
# Requirements
```
aioredis==1.3.1
aniso8601==7.0.0
asgiref==3.2.10
asn1crypto==1.4.0
astroid==2.4.2
async-timeout==3.0.1
attrs==20.2.0
autobahn==20.7.1
Automat==20.2.0
bcrypt==3.2.0
blessed==1.17.10
boto3==1.14.61
botocore==1.17.61
cached-property==1.5.1
cement==3.0.4
certifi==2020.6.20
cffi==1.14.2
channels==2.4.0
channels-redis==3.1.0
chardet==3.0.4
colorama==0.4.3
constantly==15.1.0
cryptography==3.1
daphne==2.5.0
distro==1.5.0
Django==3.1.1
django-cleanup==5.1.0
django-cors-headers==3.5.0
django-filter==2.4.0
django-graphql-auth==0.3.12
django-graphql-jwt==0.3.0
django-guardian==2.3.0
django-polymorphic==3.0.0
django-storages==1.10
djangorestframework==3.11.1
docker==4.3.1
docker-compose==1.27.2
docker-pycreds==0.4.0
dockerpty==0.4.1
docopt==0.6.2
docutils==0.16
graphene==2.1.8
graphene-django==2.13.0
graphene-file-upload==1.2.2
graphql-core==2.3.1
graphql-relay==2.0.1
graphql-ws==0.3.0
hiredis==1.1.0
hyperlink==20.0.1
idna==2.10
importlib-metadata==1.7.0
incremental==17.5.0
isort==5.5.2
jmespath==0.9.4
jsonschema==3.1.1
lazy-object-proxy==1.4.2
mccabe==0.6.1
more-itertools==7.2.0
msgpack==0.6.2
paramiko==2.6.0
pathspec==0.6.0
pbr==5.4.3
Pillow==7.2.0
promise==2.3
psycopg2==2.8.6
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
PyHamcrest==1.9.0
PyJWT==1.7.1
pylint==2.4.4
PyNaCl==1.3.0
pyOpenSSL==19.1.0
pyrsistent==0.15.4
python-dateutil==2.7.5
python-dotenv==0.14.0
pytz==2018.5
PyYAML==4.2b1
requests==2.20.0
Rx==1.6.1
s3transfer==0.3.3
semantic-version==2.5.0
sentry-sdk==0.17.5
service-identity==18.1.0
singledispatch==3.4.0.3
six==1.14.0
sqlparse==0.3.1
stevedore==1.30.1
stripe==2.43.0
termcolor==1.1.0
texttable==0.9.1
Twisted==20.3.0
txaio==18.8.1
typed-ast==1.4.0
Unidecode==1.1.1
urllib3==1.24.3
wcwidth==0.1.7
websocket-client==0.57.0
wrapt==1.12.1
zipp==3.1.0
zope.interface==5.0.0
``` | 1medium
|
Title: font "sans serif" no longer working
Body: ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
When adding a `config.toml` with the font set to `"sans serif"` explicitly, as suggested [in the documentation](https://docs.streamlit.io/develop/concepts/configuration/theming#font), a **serif** font is used instead.
```
[theme]
font = "sans serif"
```
This can be fixed by
* omitting the `font` since sans-serif it is the default
* using `"sans-serif"` (with a dash)
It seems as if the font name was changed from `"sans serif"` to `"sans-serif"` without a matching change in the documentation.
This change seems to have been introduced in version 14.3.
### Reproducible Code Example
```Python
```
### Steps To Reproduce
_No response_
### Expected Behavior
_No response_
### Current Behavior
_No response_
### Is this a regression?
- [x] Yes, this used to work in a previous version.
### Debug info
- Streamlit version:
- Python version:
- Operating System:
- Browser:
### Additional Information
_No response_ | 0easy
|
Title: Cannot connect SQL which SQL password has '%'
Body: # Expected Behavior
Hi, I want to use Flask-SQLAlchemy to connect to my SQL database, but I cannot connect because the database password contains a '%', e.g. "qU%d6".
It is quite strange, because I can connect normally using a password like "password".
Changing the SQL password may solve the problem, but I think this bug should be fixed too.
```python
DIALECT = 'mysql'
DRIVER = 'mysqldb'
USERNAME = 'xxx'
PASSWORD = 'qU%d6'
HOST = 'xxx'
PORT = 'xxx'
DATABASE = 'xxx'
SQLALCHEMY_DATABASE_URI = "{}+{}://{}:{}@{}:{}/{}?charset=utf8".format(DIALECT, DRIVER,
USERNAME, PASSWORD, HOST, PORT, DATABASE)
```
### Actual Behavior
The error is: UnicodeDecodeError: 'ascii' codec can't decode byte 0xd6 in position 3: ordinal not in range(128), so I cannot connect to the SQL database.
I have tried:
- use u'qU%d6'
- use r'qU%d6'
- change % to %%
All failed!
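One workaround that may help is to percent-encode the password before it is embedded in the URI, so the '%' is not interpreted during URL parsing. A hedged sketch (values are the placeholders from the snippet above; on Python 2.7, as in the traceback, use `from urllib import quote_plus` instead):
```python
from urllib.parse import quote_plus

DIALECT, DRIVER = 'mysql', 'mysqldb'
USERNAME, HOST, PORT, DATABASE = 'xxx', 'xxx', 'xxx', 'xxx'
PASSWORD = quote_plus('qU%d6')  # -> 'qU%25d6', so the '%' survives URL parsing

SQLALCHEMY_DATABASE_URI = "{}+{}://{}:{}@{}:{}/{}?charset=utf8".format(
    DIALECT, DRIVER, USERNAME, PASSWORD, HOST, PORT, DATABASE)
```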
```pytb
Traceback (most recent call last):
File "sqldemo.py", line 37, in <module>
db.create_all() # actually create the models in the database
File "/usr/lib/python2.7/site-packages/flask_sqlalchemy/__init__.py", line 1039, in create_all
self._execute_for_all_tables(app, bind, 'create_all')
File "/usr/lib/python2.7/site-packages/flask_sqlalchemy/__init__.py", line 1031, in _execute_for_all_tables
op(bind=self.get_engine(app, bind), **extra)
File "/usr/lib/python2.7/site-packages/flask_sqlalchemy/__init__.py", line 962, in get_engine
return connector.get_engine()
File "/usr/lib/python2.7/site-packages/flask_sqlalchemy/__init__.py", line 556, in get_engine
self._engine = rv = self._sa.create_engine(sa_url, options)
File "/usr/lib/python2.7/site-packages/flask_sqlalchemy/__init__.py", line 972, in create_engine
return sqlalchemy.create_engine(sa_url, **engine_opts)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/__init__.py", line 500, in create_engine
return strategy.create(*args, **kwargs)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/strategies.py", line 98, in create
(cargs, cparams) = dialect.create_connect_args(u)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/dialects/mysql/mysqldb.py", line 184, in create_connect_args
database="db", username="user", password="passwd"
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/url.py", line 216, in translate_connect_args
if name is not None and getattr(self, sname, False):
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/url.py", line 134, in password
return util.text_type(self.password_original)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xd6 in position 3: ordinal not in range(128)
```
### Environment
* Python version:2.7
* Flask-SQLAlchemy version:2.4.4
* SQLAlchemy version:1.3.18
| 1medium
|
Title: LongestMaxSize does upscale an image in contrast to what notes state(?)
Body: Regarding the LongestMaxSize transformation the notes state that:
> Note:
> - If the longest side of the image is already less than or equal to max_size, the image will not be resized.
> - This transform will not crop the image. The resulting image may be smaller than max_size in both dimensions.
> - For non-square images, the shorter side will be scaled proportionally to maintain the aspect ratio.
In contrast, images **seem to be upscaled** even though their longest side is smaller than the defined max_size attribute:
```
def _func_max_size(img: np.ndarray, max_size: int, interpolation: int, func: Callable[..., Any]) -> np.ndarray:
image_shape = img.shape[:2]
scale = max_size / float(func(image_shape))
if scale != 1.0:
new_height, new_width = tuple(round(dim * scale) for dim in image_shape)
return resize(img, (new_height, new_width), interpolation=interpolation)
return img
```
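In the meantime, a downscale-only behaviour can be approximated by gating the transform on the image size. This is a hedged sketch, not an Albumentations built-in; the helper name is mine:
```python
import albumentations as A
import numpy as np

def longest_max_size_no_upscale(img: np.ndarray, max_size: int) -> np.ndarray:
    # Leave images whose longest side is already within the limit untouched.
    if max(img.shape[:2]) <= max_size:
        return img
    return A.LongestMaxSize(max_size=max_size, p=1.0)(image=img)["image"]
```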
Is this expected behaviour? Does there exist a transformation that does not upscale the image but resizes (downscales) while maintaining the aspect ratio? | 1medium
|
Title: Capture Javascript errors (javascript console)
Body: it is possible capture any text from javascript console?
I need to have some way to get Javascript errors.
| 1medium
|
Title: [PR] Post events for cluster-scoped objects to current namespace
Body: > <a href="https://github.com/nolar"><img align="left" height="50" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> A pull request by [nolar](https://github.com/nolar) at _2019-08-05 18:06:51+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/pull/165
> Merged by [nolar](https://github.com/nolar) at _2019-08-07 17:45:11+00:00_
K8s-events cannot be posted cluster-scoped (?), and attached to cluster-scoped resources via API. However, we post them to the current namespace — so that they are not lost completely.
> Issue : #164
## Description
See #164 for details.
Brief summary: K8s events are namespaced; there are no cluster-scoped events. Also, k8s events can refer to an object via spec.involvedObject of type ObjectReference. This structure contains a namespace field that refers to the involved object's namespace (along with name, uid, etc).
I could not find a way to post namespaced events for cluster-scoped resources. It always fails with a namespace mismatch (regardless of which library is used), even via `curl` (see the issue comments).
So, we post the k8s events to the current namespace, so that they are available via `kubectl get events`, even though they are not shown in `kubectl describe` on the involved objects.
It is not a full solution to the problem, but it is better than just losing them completely.
## Types of Changes
- Bug fix (non-breaking change which fixes an issue)
| 1medium
|
Title: [Migrated] Overhaul Logging Subsystem
Body: Originally from: https://github.com/Miserlou/Zappa/issues/1305 by [Miserlou](https://github.com/Miserlou)
- Add and document proper log-level handling
- Redefine more sane defaults
- Add event-type and event-target based log tail filtering | 1medium
|
Title: Allow S3ObjectDatanodes to use all parameters exposed by AWS APIs
Body: ### Description
Today, an s3 object data node can only use a limited number of parameters.
Taipy should accept all possible parameters for configuring the [boto3 client](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html#boto3.session.Session.client), for reading the data with the [get_object](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3/client/get_object.html#get-object) method, and for writing the data with the [put_object](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3/client/put_object.html#put-object) method.
### Solution Proposed
**DatanodeConfig:**
The parameters should be passed by the user through the `DataNodeConfig` method
`taipy.core.config.data_node_config._configure_s3_object`.
The goal is to "keep simple things simple", in particular for most common usages.
So, the purpose is not to expose all the parameters in the configure method, but only the main ones. The others should be passed as optional parameters in kwargs properties.
**Datanode:**
All the parameters (used for the client constructor, the get_object, and the put_object methods) should be used.
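For illustration, these are the underlying boto3 calls the data node wraps; the extra keyword arguments shown (e.g. endpoint_url, VersionId, StorageClass) are examples of boto3 parameters that this issue proposes to expose, not existing Taipy options, and all values are placeholders:
```python
import boto3

# Client construction: today only a subset of these can be configured.
client = boto3.client(
    "s3",
    aws_access_key_id="...",
    aws_secret_access_key="...",
    region_name="eu-west-1",
    endpoint_url="http://localhost:9000",  # e.g. MinIO or localstack
)

# Reading: get_object accepts many more parameters than are exposed today.
obj = client.get_object(Bucket="my-bucket", Key="data.csv", VersionId="...")

# Writing: the same goes for put_object.
client.put_object(
    Bucket="my-bucket",
    Key="data.csv",
    Body=b"...",
    StorageClass="STANDARD_IA",
    Metadata={"origin": "taipy"},
)
```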
### Acceptance Criteria
- [ ] Ensure the new code is unit tested, and check that the code coverage is at least 90%.
- [ ] Create related issues in taipy-doc for documentation and Release Notes.
### Code of Conduct
- [x] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | 1medium
|
Title: Add (optional) Black pre-commit hook
Body: | 1medium
|
Title: The SRL predictor doesn't work with the following error message
Body: <!--
Please fill this template entirely and do not erase any of it.
We reserve the right to close without a response bug reports which are incomplete.
If you have a question rather than a bug, please ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/allennlp) rather than posting an issue here.
-->
## Checklist
<!-- To check an item on the list replace [ ] with [x]. -->
- [x ] I have verified that the issue exists against the `master` branch of AllenNLP.
- [x ] I have read the relevant section in the [contribution guide](https://github.com/allenai/allennlp/blob/master/CONTRIBUTING.md#bug-fixes-and-new-features) on reporting bugs.
- [x ] I have checked the [issues list](https://github.com/allenai/allennlp/issues) for similar or identical bug reports.
- [x ] I have checked the [pull requests list](https://github.com/allenai/allennlp/pulls) for existing proposed fixes.
- [x ] I have checked the [CHANGELOG](https://github.com/allenai/allennlp/blob/master/CHANGELOG.md) and the [commit log](https://github.com/allenai/allennlp/commits/master) to find out if the bug was already fixed in the master branch.
- [x ] I have included in the "Description" section below a traceback from any exceptions related to this bug.
- [x ] I have included in the "Related issues or possible duplicates" section below all related issues and possible duplicate issues (If there are none, check this box anyway).
- [x ] I have included in the "Environment" section below the name of the operating system and Python version that I was using when I discovered this bug.
- [x ] I have included in the "Environment" section below the output of `pip freeze`.
- [x ] I have included in the "Steps to reproduce" section below a minimally reproducible example.
## Description
<!-- Please provide a clear and concise description of what the bug is here. -->
<details>
<summary><b>Python traceback:</b></summary>
<p>
<!-- Paste the traceback from any exception (if there was one) in between the next two lines below -->
The error:
```
Traceback (most recent call last):
File "/home/skhanehzar/DeployedProjects/Narrative/pipeline/srl.py", line 112, in <module>
"https://storage.googleapis.com/allennlp-public-models/structured-prediction-srl-bert.2020.12.15.tar.gz")
File "/home/skhanehzar/anaconda3/envs/narrative/lib/python3.7/site-packages/allennlp/predictors/predictor.py", line 275, in from_path
load_archive(archive_path, cuda_device=cuda_device),
File "/home/skhanehzar/anaconda3/envs/narrative/lib/python3.7/site-packages/allennlp/models/archival.py", line 197, in load_archive
opt_level=opt_level,
File "/home/skhanehzar/anaconda3/envs/narrative/lib/python3.7/site-packages/allennlp/models/model.py", line 398, in load
return model_class._load(config, serialization_dir, weights_file, cuda_device, opt_level)
File "/home/skhanehzar/anaconda3/envs/narrative/lib/python3.7/site-packages/allennlp/models/model.py", line 337, in _load
model.load_state_dict(model_state)
File "/home/skhanehzar/anaconda3/envs/narrative/lib/python3.7/site-packages/torch/nn/modules/module.py", line 847, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for SrlBert:
Unexpected key(s) in state_dict: "bert_model.embeddings.position_ids".
Process finished with exit code 1
```
</p>
</details>
## Related issues or possible duplicates
- None
## Environment
<!-- Provide the name of operating system below (e.g. OS X, Linux) -->
OS:
<!-- Provide the Python version you were using (e.g. 3.7.1) -->
Python version: 3.7.9
<details>
<summary><b>Output of <code>pip freeze</code>:</b></summary>
<p>
<!-- Paste the output of `pip freeze` in between the next two lines below -->
```
(narrative) skhanehzar@slug:~$ pip freeze
allennlp==1.0.0
allennlp-models==1.0.0
attrs==20.3.0
blis==0.4.1
boto3==1.17.8
botocore==1.20.8
cached-property==1.5.2
catalogue==1.0.0
certifi==2020.12.5
chardet==4.0.0
click==7.1.2
conllu==3.0
cymem==2.0.5
en-core-web-sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.5/en_core_web_sm-2.2.5.tar.gz
filelock==3.0.12
future==0.18.2
h5py==3.1.0
idna==2.10
importlib-metadata==3.4.0
iniconfig==1.1.1
jmespath==0.10.0
joblib==1.0.1
jsonnet==0.17.0
jsonpickle==2.0.0
murmurhash==1.0.5
nltk==3.5
numpy==1.20.1
overrides==3.0.0
packaging==20.9
plac==1.1.3
pluggy==0.13.1
preshed==3.0.5
protobuf==3.14.0
py==1.10.0
py-rouge==1.1
pyparsing==2.4.7
pytest==6.2.2
python-dateutil==2.8.1
regex==2020.11.13
requests==2.25.1
s3transfer==0.3.4
sacremoses==0.0.43
scikit-learn==0.24.1
scipy==1.6.0
sentencepiece==0.1.95
six==1.15.0
spacy==2.2.4
srsly==1.0.5
tensorboardX==2.1
thinc==7.4.0
threadpoolctl==2.1.0
tokenizers==0.7.0
toml==0.10.2
torch==1.5.1
tqdm==4.56.2
transformers==2.11.0
typing-extensions==3.7.4.3
urllib3==1.26.3
wasabi==0.8.2
word2number==1.1
zipp==3.4.0
```
</p>
</details>
## Steps to reproduce
Run the following code with the Python interpreter
```
from allennlp.predictors.predictor import Predictor
import allennlp_models.tagging
predictor = Predictor.from_path(
"https://storage.googleapis.com/allennlp-public-models/structured-prediction-srl-bert.2020.12.15.tar.gz")
pp = predictor.predict(
sentence="Did Uriah honestly think he could beat the game in under three hours?."
)
```
<details>
<summary><b>Example source:</b></summary>
<p>
<!-- Add a fully runnable example in between the next two lines below that will reproduce the bug -->
See steps to reproduce above
</p>
</details>
| 1medium
|
Title: Type unions errors aren't showed in frontend
Body: Imagine my form is:
```python
class Form(BaseModel):
email: EmailStr | int
```
I know it doesn't make sense to have that union, but it is just to reproduce the issue. At runtime, when I pass an invalid email address, I get this error:
```json
{
"detail": {
"form": [
{
"type": "value_error",
"loc": [
"email",
"function-after[_validate(), str]"
],
"msg": "value is not a valid email address: The part after the @-sign is not valid. It should have a period."
},
{
"type": "int_parsing",
"loc": [
"email",
"int"
],
"msg": "Input should be a valid integer, unable to parse string as an integer"
}
]
}
}
```
Meanwhile, with None in the union, that doesn't happen: loc would be an array with only "email".

As you can see, no error appeared in the first scenario with the union of int and email string, but in the second scenario it was handled correctly with the union of email string and None.

| 1medium
|
Title: automatic fake audio generation
Body: Hi Corentin,
I want to generate a dataset of fake audios on my own using this toolbox. Is there any way to generate them automatically as I have to generate them manually one by one which is taking too long?
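In case it helps, a batch loop can be assembled from the same calls that demo_cli.py uses. This is only a hedged sketch: the model paths, the reference_audio folder and the sentence list are placeholders you would need to adapt.
```python
from pathlib import Path
import soundfile as sf

from encoder import inference as encoder
from synthesizer.inference import Synthesizer
from vocoder import inference as vocoder

# Placeholder model paths; point these at your downloaded pretrained models.
encoder.load_model(Path("encoder/saved_models/pretrained.pt"))
synthesizer = Synthesizer(Path("synthesizer/saved_models/pretrained/taco_pretrained"))
vocoder.load_model(Path("vocoder/saved_models/pretrained/pretrained.pt"))

sentences = ["First sentence to fake.", "Second sentence to fake."]

for ref in Path("reference_audio").glob("*.wav"):
    embed = encoder.embed_utterance(encoder.preprocess_wav(ref))
    specs = synthesizer.synthesize_spectrograms(sentences, [embed] * len(sentences))
    for i, spec in enumerate(specs):
        wav = vocoder.infer_waveform(spec)
        sf.write(f"fake_{ref.stem}_{i}.wav", wav, synthesizer.sample_rate)
```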
| 2hard
|
Title: No solution for Directories Comparison Shell Script Problem
Body: The [Directories Comparison shell scripting practice problem](https://github.com/bregman-arie/devops-exercises/blob/master/exercises/shell/directories_comparison.md) does not have a solution | 1medium
|
Title: How can trame be deployed in an environment with no desktop or graphical interface, such as k8s
Body: <!-- Ignoring this template may result in your bug report getting deleted -->
How can trame be deployed in an environment with no desktop or graphical interface, such as k8s
[x] ubuntu
[x] arm64
| 1medium
|
Title: ENH: `DatetimeIndex.set_freq()`
Body: ### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
I can set/change the name of a `DatetimeIndex` with `.rename()`. But I cannot set/change its frequency in the same manner.
### Feature Description
To rename a `DatetimeIndex`, I can do this inplace with `idx.name = 'foo'`. Or I can get a new object with `idx2 = idx.rename('foo')`.
I can set or change the frequency in place with `idx.freq = 'QS-APR'`, but an analogous method for setting or changing the frequency (one that returns a new object) does not exist.
**This proposal is to add the method `DatetimeIndex.set_freq`**
Considering the method name: `with_freq()` or `set_freq()` would both work. I would not use `as_freq()` to avoid confusion with the existing methods `Series.as_freq()` and `DataFrame.as_freq()` which have a different functionality (i.e., change the index length).
The method body would be something like
```py
def set_freq(self, freq, *, inplace: bool = False) -> Self | None:
if inplace:
self.freq = freq
else:
idx = self.copy()
idx.freq = freq
return idx
```
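Usage of the proposed method would then look like this (hypothetical, since the method does not exist yet; the frequencies mirror the equivalent QS-FEB/QS-MAY example further down):
```python
import pandas as pd

idx = pd.date_range("2024-02-01", periods=4, freq="QS-FEB")

idx2 = idx.set_freq("QS-MAY")          # returns a new DatetimeIndex
idx.set_freq("QS-MAY", inplace=True)   # or change it in place
```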
I'm happy to create a PR for this if devs think this is a worthwhile addition
### Alternative Solutions
I can keep on using
```py
idx2 = idx.copy()
idx2.freq = freq
```
but that cannot be used in list comprehensions or lambda expressions and is not chainable, and looks more clunky.
If I need something chainable, the best I think I can do is
```py
idx2 = idx.to_frame().asfreq(freq).index
```
though that undeservedly raises an Exception if the frequencies are equivalent (e.g. QS-FEB and QS-MAY).
### Additional Context
See also https://github.com/pandas-dev/pandas/issues/61086 | 1medium
|
Title: function `load_dataset` can't solve folder path with regex characters like "[]"
Body: ### Describe the bug
When using the `load_dataset` function with a folder path containing regex special characters (such as "[]"), the issue occurs due to how the path is handled in the `resolve_pattern` function. This function passes the unprocessed path directly to `AbstractFileSystem.glob`, which supports regular expressions. As a result, the globbing mechanism interprets these characters as regex patterns, leading to a traversal of the entire disk partition instead of confining the search to the intended directory.
### Steps to reproduce the bug
just create a folder like `E:\[D_DATA]\koch_test`, then `load_dataset("parquet", data_dir="E:\[D_DATA]\\test", split="train")`
it will keep searching the whole disk.
I added two `print` statements in `glob` and `resolve_pattern` to see the path.
### Expected behavior
it should load the dataset as in normal folders
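A possible interim workaround, untested and based on the assumption that fsspec's glob honours the same escaping rules as Python's glob module, would be to escape the metacharacters before passing the path:
```python
import glob
from datasets import load_dataset

data_dir = glob.escape(r"E:\[D_DATA]\test")   # '[' and ']' are escaped
ds = load_dataset("parquet", data_dir=data_dir, split="train")
```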
### Environment info
- `datasets` version: 3.3.2
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.10.16
- `huggingface_hub` version: 0.29.1
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0 | 1medium
|
Title: Issue with dates format in request options
Body: Hello,
Sorry if I am in the wrong place I am new here.
I have an issue when I use the csv_req_option with dates. The date filter isn't applied when I pass it in argument whereas it works for all kinds of string.
Imagine the next dashboard :
Col A = Date
Col B = Event
if I use
csv_req_option = TSC.CSVRequestOptions()
csv_req_option.vf('any Event')
so far everything works
now
csv_req_option = TSC.CSVRequestOptions()
csv_req_option.vf('01/01/2021')
it doesn't work; the filter isn't applied
I have tried different date formats but it didn't work...
Thanks for your help.
Best | 1medium
|
Title: 403 and 404 Errors Still Persist When Querying Usernames
Body: ### Installation method
PyPI (via pip)
### Description
When I query a username, 403 and 404 errors are still being reported
### Steps to reproduce

"When I query a username, 403 and 404 errors are still being reported."
And usernames that should have information, such as 'X', are not being found in the query results.
### Additional information
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | 1medium
|
Title: [🐛 BUG] inactive file_selector is still active
Body: ### What went wrong? 🤔
From hugoo on Discord
I would like to leave a file_selector inactive so that the user cannot use it. However, doing active=False only has style effects and the file_selector continues to work
### Expected Behavior
inactive file_selector should be inactive
### Steps to Reproduce Issue
```
from taipy import Gui
filename = ""
page = """
<|{filename}|file_selector|active=False|>
"""
gui = Gui(page=page)
gui.run()
```
### Solution Proposed
_No response_
### Screenshots

### Runtime Environment
_No response_
### Browsers
_No response_
### OS
_No response_
### Version of Taipy
_No response_
### Additional Context
_No response_
### Acceptance Criteria
- [x] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Create related issue in taipy-doc for documentation and Release Notes.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [X] I am willing to work on this issue (optional) | 1medium
|
Title: Fix bug with importing service task when workflow extra not installed
Body: xmltodict was recently added as a dependency for ServiceTask. Given that this library is optional, it should be wrapped in a conditional import. Currently, the entire workflow package can't be imported unless xmltodict is present, which violates txtai's policy of only failing when the specific library is needed. | 1medium
|
Title: Bad hyperlink on documentation
Body: NonNull reference under required field from this page https://docs.graphene-python.org/en/latest/types/scalars/ redirects to an inexistent page, namely:
`https://docs.graphene-python.org/en/latest/types/scalars/list-and-nonnull/#nonnull`
The correct link is
`https://docs.graphene-python.org/en/latest/types/list-and-nonnull/#nonnull`
| 0easy
|
Title: add method to forecasters to return the input data that is passed to the model to make predictions
Body: Currently, there is no way to see which data is being used to make predictions.
`create_X_y` creates the data that is used to train the model, but it's not able to create the data that is passed to `predict()`.
The data for the prediction is created in predict() but it is not exposed.
Could we capture the lines that create the input data to regressor.predict() in a new method, say create_forecast_input(), to expose that value to the user?
It'd help with debugging and understanding what the forecaster does under the hood | 1medium
|
Title: M.layers.annotation.polygons[0].data - IndexError: list index out of range
Body: When I run 3rd cell from
https://github.com/OpenGeoscience/geonotebook/blob/master/notebooks/04_Annotations.ipynb
d, n = next(M.layers.annotation.polygons[0].data)
I receive:
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-11-66a9995560d8> in <module>()
----> 1 d, n = next(M.layers.annotation.polygons[0].data)
2 #d, n = next(M.layers.annotation.rectangles[1].data)
IndexError: list index out of range | 1medium
|
Title: [BUG] Adding Field examples breaks generated swagger docs
Body: **Describe the bug**
Using the approach described in #1115 to add a description of a parameter, the generated swagger docs break when the `example=` property is added. The generated API docs show `Could not render Parameters, see the console.`
**Versions (please complete the following information):**
- Python version: 3.11.6
- Django version: 5.1
- Django-Ninja version: 1.3.0
- Pydantic version: 2.9.2
Example:
```
from datetime import datetime, timezone
from typing import Annotated
from typing import List
from ninja import Router, Query, Schema, Field
router = Router()
class FilterParams(Schema):
end: Annotated[
datetime,
Field(
examples=[datetime.now(timezone.utc)], # fails when this line is uncommented
description="ISO-formatted timestamp of the latest item to return",
),
]
@router.get(
"",
url_name="list",
response=List[MySchemaOut],
)
def my_list(request, filters: Query[FilterParams]):
pass
``` | 1medium
|
Title: Doesn't work with Google Colab
Body: Running this small test on Google Colab
```
%%run_pytest[clean] -qq
def test_example():
assert [1, 2, 3] == [1, 2, 3]
```
results in exception:
```
/usr/local/lib/python3.6/dist-packages/pluggy/hooks.py:258: in __call__
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
/usr/local/lib/python3.6/dist-packages/pluggy/manager.py:67: in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
/usr/local/lib/python3.6/dist-packages/pluggy/manager.py:61: in <lambda>
firstresult=hook.spec_opts.get('firstresult'),
/usr/local/lib/python3.6/dist-packages/ipytest/_pytest_support.py:143: in pytest_collect_file
parent, fspath=path.new(ext=".py"), module=self.module
/usr/local/lib/python3.6/dist-packages/ipytest/_pytest_support.py:156: in from_parent
self = super().from_parent(parent, fspath=fspath)
E AttributeError: 'super' object has no attribute 'from_parent'
``` | 1medium
|
Title: I made a video demonstrating how I used it
Body: https://www.youtube.com/watch?v=HZtuHgpRoyc
"Dolly Parton, neural voice clone, tells patient's family about their medical equipment." | 3misc
|
Title: When ingest dataframe, use alternative tagging
Body: In addition to ticket #79
Would it be possible, next to data_frame_tag_columns=tag_columns, to also have a 'data_frame_tag=' argument? This way a tag that doesn't appear in the DF could still be added.
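For the stock-price use case described below, this can currently be approximated by adding constant columns to the frame and listing them in data_frame_tag_columns. A hedged sketch (connection details and the frame itself are placeholders):
```python
import pandas as pd
from influxdb_client import InfluxDBClient
from influxdb_client.client.write_api import SYNCHRONOUS

# Minimal stand-in for the OHLC frame described below.
df = pd.DataFrame(
    {"open": [1.0], "close": [1.1]},
    index=pd.to_datetime(["2021-01-04T09:30:00Z"]),
)
df["ticker"] = "AAPL"        # constant columns: every row gets the same tag value
df["exchange"] = "NASDAQ"

with InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org") as client:
    client.write_api(write_options=SYNCHRONOUS).write(
        bucket="stocks",
        record=df,
        data_frame_measurement_name="prices",
        data_frame_tag_columns=["ticker", "exchange"],
    )
```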
For example, I have a DF with stock prices: timestamp, open, high, low, close (etc) data. I would like to be able to add tags as ticker, exchange etc which don't appear in the DF, by using a 'data_frame_tag=' argument with data_frame_tag='NASDAQ', 'AAPL' | 1medium
|
Title: How is the number of anchors calculated?
Body: ### Search before asking
- [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
As input I have an image (640x640x3), and the output of the model is 1x25200x7, where 1 is the batch size and 7 is the number of classes + 5 for the bbox and objectness. But what is 25200?
Can someone please tell me a direct formula based on input size to calculate it?
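For the default P5 models, my understanding (a sketch, not an official formula) is that the head predicts on three grids with strides 8, 16 and 32 and 3 anchors per grid cell, so the count follows from the input size:
```python
# 25200 = 3 * (80*80 + 40*40 + 20*20) for a 640x640 input
def num_predictions(imgsz: int, strides=(8, 16, 32), anchors_per_cell: int = 3) -> int:
    return anchors_per_cell * sum((imgsz // s) ** 2 for s in strides)

print(num_predictions(640))  # 25200
print(num_predictions(320))  # 6300
```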
### Additional
Also, I found from the model architecture that 25200 = CxCxB, but I don't know B or C. I also found that it depends on the input size; for 320x320x3 it is smaller, but I don't remember by how much. | 1medium
|
Title: Local installation fails
Body: Hi, I followed Readme instructions, but I cant get it to run either (no luck for me with Docker install either)..
I have Win11. All steps done from anaconda prompt and virtual environment. This is a workflow:
(Win11)
create app folder
(conda)
move to folder Moseca
git clone (*.git)
conda create -p D:\Audio\Moseca\moseca python=3.10.0
cd moseca
conda activate
pip install -r requirements.txt
set PYTHONPATH=D:\Audio\Moseca\moseca
curl -LJO https://huggingface.co/fabiogra/baseline_vocal_remover/resolve/main/baseline.pth
streamlit run app/header.py
I got errors at streamlit step.
(d:\Audio\Moseca\moseca) D:\Audio\Moseca\moseca>streamlit run app/header.py
Fatal Python error: init_import_site: Failed to import the site module
Python runtime state: initialized
Traceback (most recent call last):
File "d:\Audio\Moseca\moseca\lib\site.py", line 617, in <module>
main()
File "d:\Audio\Moseca\moseca\lib\site.py", line 604, in main
known_paths = addsitepackages(known_paths)
File "d:\Audio\Moseca\moseca\lib\site.py", line 387, in addsitepackages
addsitedir(sitedir, known_paths)
File "d:\Audio\Moseca\moseca\lib\site.py", line 226, in addsitedir
addpackage(sitedir, name, known_paths)
File "d:\Audio\Moseca\moseca\lib\site.py", line 179, in addpackage
for n, line in enumerate(f):
File "d:\Audio\Moseca\moseca\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 944: character maps to <undefined>
python --version shows 3.10.0, as it should. Outside the venv I have Python 3.11.
I'm no coder and most probably I'm doing something wrong.
Though it is not the same error as on Docker installation.
Till streamlit, installation went with no problems or error msgs. | 1medium
|
Title: Map.addLayerControl() doesn't seem to be working
Body: <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- geemap version: 0.13.1
- Python version: 3.9.12 (conda 4.12.0)
- Operating System: Windows 11
### Description
I'm new to geemap and was looking around a bit and following along the instructions on this page:
[https://geemap.org/notebooks/geemap_and_folium]
### What I Did
in cell [18]
```
Map.addLayerControl()
Map
```
No layercontrol appeared in the top-right of the map, as I was expecting (like in folium/leaflet)
In the later steps, when adding the various basemaps, they couldn't be found either.
seems something is broken, or I am doing something quite wrong :(
this is the final image after executing cell [21]. pretty bare :(

| 1medium
|
Title: [BUG] Paperless webserver not honoring port specified in docker-compose.yaml
Body: **Describe the bug**
Run paperless-ng via one of the docker compose scripts provided by paperless. Change the default port 8000 and url of the health check in the compose script prior to running with docker compose. Webserver container (specifically Gunicorn) does not use port specified in compose file, because the port 8000 is hardcoded in the docker/gunicorn.conf.py file ([line 1](https://github.com/jonaswinkler/paperless-ng/blob/3b17f9d6ecc6d1e3459619458ea3fefb260a116d/docker/gunicorn.conf.py#L1)).
**To Reproduce**
Steps to reproduce the behavior:
1. Download one of the sample docker compose files
2. Modify default port (8000) for the webserver to an alternative port
3. Start docker stack using the modified compose file
4. Observe that the log file for the webserver container still reports gunicorn using port 8000 and that the web interface to Paperless is not available on the specified non-default port
**Expected behavior**
Paperless should honour the webserver port specified in the compose file and make the web interface available on that port and avoid binding to the default port (8000) if a different port is specified.
**Webserver logs**
After specifying port 8001 in the compose file, the webserver container log confirms gunicorn still binds to port 8000.
```
Paperless-ng docker container starting...
Mapping UID and GID for paperless:paperless to 1027:100
Creating directory /tmp/paperless
Adjusting permissions of paperless files. This may take a while.
Apply database migrations...
Operations to perform:
Apply all migrations: admin, auth, authtoken, contenttypes, django_q, documents, paperless_mail, sessions
Running migrations:
No migrations to apply.
Executing /usr/local/bin/supervisord -c /etc/supervisord.conf
2021-08-31 12:56:21,791 INFO Set uid to user 0 succeeded
2021-08-31 12:56:21,795 INFO supervisord started with pid 1
2021-08-31 12:56:22,797 INFO spawned: 'consumer' with pid 47
2021-08-31 12:56:22,799 INFO spawned: 'gunicorn' with pid 48
2021-08-31 12:56:22,801 INFO spawned: 'scheduler' with pid 49
[2021-08-31 12:56:23 +0000] [48] [INFO] Starting gunicorn 20.1.0
[2021-08-31 12:56:23 +0000] [48] [INFO] Listening at: http://0.0.0.0:8000 (48)
[2021-08-31 12:56:23 +0000] [48] [INFO] Using worker: paperless.workers.ConfigurableWorker
[2021-08-31 12:56:23 +0000] [48] [INFO] Server is ready. Spawning workers
```
**Relevant information**
- Host OS of the machine running paperless: Synology NAS DS220+
- Browser: Same result in chrome and edge
- Version: 1.5.0
- Installation method: docker
- Changes made in `docker-compose.yml`: webserver port changed to anything other than 8000 on both the ports declaration for the webserver and for the health check test url; e.g.
```
webserver:
ports:
- 8001:8001
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8001"]
``` | 1medium
|
Title: Add Resistance Bands as Equipment
Body: ... so users can add exercise variations of the ones with machines/weights
motive:
decided to go back to exercising, but this time at home with Resistance Bands; some exercises don't vary much from the OG ones, but there's still a difference
Extra:
Illustrations are very important for these types of exercises because they are more "flexible". Shouldn't there be an option to add them, so someone who checks the exercise knows what to do instead of having to look up the proper "configuration"?
| 1medium
|
Title: Migrating to React functional components
Body: I'm interested in creating my own dash component using React and while generating a project using `dash-component-boilerplate`, the output React files are written using class components. Any plans on migrating that to functional components instead?
I've already migrated my own project files but thought to bring this topic up for discussion since the latest React docs are all written using functional components and while it is still in Beta, they do mention that these will replace the older docs.
Thanks! | 1medium
|
Title: What is the purpose of parse_obj's second argument: "update"?
Body: ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
SomeModel.parse_obj({"one": 2}, {"three": 4})
```
### Description
SQLModel's `parse_obj` supports the update argument, which is not supported by pydantic's BaseModel. However, I do not see it ever used in the docs, GitHub issues, or the source code.
Could someone explain why it was added? What was the actual use case? I understand what it does, but I don't understand why it does it.
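For what it's worth, the pattern it seems intended to enable (a hedged illustration based on my reading, not an authoritative answer; the model names are mine) is supplying extra values that the parsed object does not carry, e.g. adding a computed field when converting an input schema:
```python
from sqlmodel import SQLModel

class UserCreate(SQLModel):
    email: str
    password: str

class User(SQLModel):
    email: str
    hashed_password: str = ""

payload = UserCreate(email="[email protected]", password="secret")
# The second argument merges values that the source dict does not have.
user = User.parse_obj(payload.dict(), update={"hashed_password": "fake-" + payload.password})
```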
### Operating System
Linux, Windows, macOS, Other
### Operating System Details
_No response_
### SQLModel Version
0.0.8
### Python Version
3.11
### Additional Context
_No response_ | 1medium
|
Title: a little miss
Body: when i forget install this packet -> apispec
page will back err -> internal server error
just have not other tips, so i search the resource code,
in marshmallow_apispec.py

there is some import code there that, if the import fails, sets Schema = None | 1medium
|
Title: Support for custom types in particular enums
Body: **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
Given below schema, where enum field type is some.CustomEnum
```
{
"title": "Test",
"description": "Test 123",
"type": "object",
"properties": {
"enum": {
"title": "Enum",
"default": "one",
"type": "some.CustomEnum"
}
}
}
```
I would like code generator to give below
```
...
class Test(BaseModel):
enum: some.CustomEnum = Field('one', title='Enum')
```
At the moment instead of some.CustomEnum it is giving Any.
Note some.CustomEnum would not type check correctly with the one generated by code gen even though they have the same values.
This is because I already have the python code for some.CustomEnum and do not need the code to be generated again.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
This is to handle cases where I already have some of the models and I just want the typing next to the field name in the code generation.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
One solution is to manually parse the output of the current code gen and remove the enum class and also update the enum type. This is tedious as one would have to define patterns for when class starts and ends.
Another solution is to have a new section in the schema that has information about these fields and somehow add it to the generated class but then the Field information would not be available.
I actually thought enum_field_as_literal="all" flag would convert all enums to literal[...], which would help but it didn't seem to do anything.
**Additional context**
Add any other context or screenshots about the feature request here.
| 1medium
|