date | nb_tokens | text_size | content
---|---|---|---
2018/03/18 | 1,040 | 3,248 |
<issue_start>username_0: I have the following table in PostgreSQL and need to get all ancestors of a person with a given ID.
It is also necessary to be able to distinguish father and mother from the results.
```
Person table - has about 1M rows, schema looks like this:
+-----+--------+--------+
| id | father | mother |
+-----+--------+--------+
| 1 | 2 | 3 |
| 2 | 4 | 5 |
| 3 | 6 | 7 |
| 4 | 8 | 9 |
| 5 | 10 | 11 |
| ... | ... | ... |
| ... | ... | ... |
+-----+--------+--------+
```
Currently I am doing queries in a loop, getting a single row per person.
Is it possible to get all ancestors in a single query (or two queries)?
Example result for querying id 2:
```
+----+--------+--------+
| id | father | mother |
+----+--------+--------+
| 2 | 4 | 5 |
| 4 | 8 | 9 |
| 5 | 10 | 11 |
+----+--------+--------+
```<issue_comment>username_1: THIS ANSWERS THE ORIGINAL VERSION OF THE QUESTION.
For this purpose, it is easiest to unpivot your table, which has one column per parent, into one row per child/parent pair. Then use a recursive CTE to get all parents.
The code looks like this:
```
with recursive t as (
select 1 as id, 2 as father, 3 as mother union all
select 2, 4, 5 union all
select 3, 6, 7 union all
select 4, 8, 9 union all
select 5, 10, 11
),
parents as (
select id, father as parent from t union all
select id, mother from t
),
cte as (
select p.id, p.parent
from parents p
where id = 2 -- or whatever id you want
union all
select cte.id, p.parent
from cte join
parents p
on cte.parent = p.id
)
select *
from cte;
```
[Here](http://www.sqlfiddle.com/#!15/9eecb7db59d16c80417c72d1e1f4fbf1/21156) is a SQL Fiddle.
Upvotes: 1 <issue_comment>username_2: If I understood correctly,
you could just use a `JOIN` or a simple `IN()`.
For the `IN` it would be something like:
```
SELECT
p.id,
p.father,
p.mother
FROM person p
WHERE
p.id = 2
OR p.id IN (SELECT father FROM person WHERE id = 2)
OR p.id IN (SELECT mother FROM person WHERE id = 2)
```
This would give you:
```
| id | father | mother |
|----|--------|--------|
| 2 | 4 | 5 |
| 4 | 8 | 9 |
| 5 | 10 | 11 |
```
For the `JOIN`, you could do a self-join that pairs the parent columns with the id, like this:
```
SELECT
x.id as id,
x.father as father,
x.mother as mother
FROM person p
LEFT JOIN person x ON x.id = p.id OR x.id = p.father OR x.id = p.mother
WHERE
p.id = 2
```
This will also give you the same results.
From there, you can add more conditions to the `WHERE` clause to refine the results.
[Fiddle Demo](http://sqlfiddle.com/#!15/50c45/79)
Upvotes: 0 <issue_comment>username_3: ```
WITH recursive ParentOf (id, father, mother )
AS
(
-- Anchor query
SELECT id, father, mother
FROM test
WHERE id = ? -- e.g. 2
UNION ALL
-- Recursive query
SELECT t.id, t.father, t.mother
FROM test t
INNER JOIN ParentOf
ON t.id = ParentOf.father OR t.id = ParentOf.mother
)
-- Statement that executes the CTE
SELECT id, father, mother
FROM ParentOf;
```
Upvotes: 4 [selected_answer]
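If you want to sanity-check the selected recursive CTE without a PostgreSQL server, SQLite also supports `WITH RECURSIVE`, so the same query shape can be exercised from Python's built-in `sqlite3` module (table and column names below mirror the question's example and are otherwise illustrative):

```python
import sqlite3

# Tiny in-memory pedigree matching the question's example rows.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, father INTEGER, mother INTEGER)")
con.executemany(
    "INSERT INTO person VALUES (?, ?, ?)",
    [(1, 2, 3), (2, 4, 5), (3, 6, 7), (4, 8, 9), (5, 10, 11)],
)

# Same shape as the accepted answer: an anchor row for the starting id,
# then a repeated join on father/mother to climb the tree.
rows = con.execute(
    """
    WITH RECURSIVE parent_of(id, father, mother) AS (
        SELECT id, father, mother FROM person WHERE id = ?
        UNION ALL
        SELECT p.id, p.father, p.mother
        FROM person AS p
        JOIN parent_of AS po ON p.id = po.father OR p.id = po.mother
    )
    SELECT id, father, mother FROM parent_of
    """,
    (2,),
).fetchall()

print(sorted(rows))  # [(2, 4, 5), (4, 8, 9), (5, 10, 11)]
```

The query stops by itself once a generation has no rows in the table, which is exactly the termination behavior the loop-based approach in the question emulates by hand.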
|
2018/03/18 | 966 | 3,107 |
<issue_start>username_0: I have a ListView with a few image items. I want to get the item's path (the image) when it is clicked, store it in a string, and insert it into a database as one of the VALUES. Thank you.
XML
```
```
Button Function
```
private void btnRegSucces_Click(object sender, RoutedEventArgs e)
{
try
{
con.Open();
String query = "INSERT into jucatori(utilizator,password) VALUES('" + tbxUtilizator.Text + "','" + tbxParola.Password + "')";
SqlDataAdapter sda = new SqlDataAdapter(query, con);
sda.SelectCommand.ExecuteNonQuery();
con.Close();
//String geo = LvImagini.SelectedItem.ToString();
}
catch
{
LabelError.Content = "E R O A R E!";
}
}
```
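Note that the `INSERT` in the snippet above is built by concatenating user input into the SQL string, which is open to SQL injection. The remedy, binding parameters instead of concatenating, looks much the same in every client library (ADO.NET uses named `@parameters`); here is a minimal, illustrative sketch using Python's built-in `sqlite3` with `?` placeholders:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE jucatori (utilizator TEXT, password TEXT)")

# Input like this would break (or exploit) a concatenated query string.
user, pw = "alice", "it's a ' quote"

# Placeholders let the driver escape the values; the quote is handled safely.
con.execute("INSERT INTO jucatori (utilizator, password) VALUES (?, ?)", (user, pw))

print(con.execute("SELECT utilizator, password FROM jucatori").fetchone())
# ('alice', "it's a ' quote")
```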
|
2018/03/18 | 1,001 | 3,355 |
<issue_start>username_0: I have an input form that asks a user to enter their name. I have an event listener that listens if the form has been submitted. I am able to access the input the user enters from within a function but after many attempts I have been unable to get the user input to be stored in a global variable.
My most recent attempt involves using JS to create a p tag and then inserting the user input between the tags in the hope that I can later access the information between the p tags. This is not working either.
I do not understand why I am able to get content from various elements but am struggling so much to access and store user input. I would really appreciate any help.
Below is my most recent attempt:
```
Submit
myForm.addEventListener("submit", storeName);
function storeName() {
var temp = document.forms["myForm"].querySelector('input[type = "text"]').value;
var para = document.createElement("p");
para.textContent = temp;
document.getElementById("fdiv").appendChild(para);
}
```
|
2018/03/18 | 917 | 2,920 |
<issue_start>username_0: I'm trying to build a scoreboard in HTML and JS, and I'm wondering: is there any way I can attach one function as the event listener for two or three different buttons?
My first thought is using switch statement like:
```js
document.getElementById("1").addEventListener("click", score);
document.getElementById("2").addEventListener("click", score);
document.getElementById("3").addEventListener("click", score);
function score() {
switch(button.id) {
case "1":
...
case "2":
...
...
}
}
```
but it doesn't work, can someone help me ?
thanks.
|
2018/03/18 | 904 | 3,769 |
<issue_start>username_0: I have three tabs in an activity: "camera", "gallery", "added in posts",
and I would like to make the camera fragment full screen. The screenshots show everything I want to do.
This is my app:
[screenshot](https://i.stack.imgur.com/0XQ8t.jpg)
and I would like to make it like WhatsApp (the action bar hides):
[whatsapp screenshot](https://i.stack.imgur.com/fTZO9.jpg)
, or like Facebook (if the WhatsApp style is too hard): [facebook screenshot](https://i.stack.imgur.com/HAy5l.jpg)
GalleryActivity code:
```
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_gallery);
galleryToolbar = findViewById(R.id.galleryToolbar);
viewPagerGallery = findViewById(R.id.viewPagerGallery);
galleryTabLayout = findViewById(R.id.galleryTabLayout);
galleryFragment = new RecyclerGalleryFragment();
addedPhotosFragment = new AddedPhotosFragment();
setSupportActionBar(galleryToolbar);
galleryTabLayout.addTab(galleryTabLayout.newTab().setText("Camera"));
galleryTabLayout.addTab(galleryTabLayout.newTab().setText("Gallery"));
galleryTabLayout.addTab(galleryTabLayout.newTab().setText("Added in posts"));
galleryTabLayout.setTabGravity(TabLayout.MODE_FIXED);
tabsGalleryAdapter = new TabsGalleryAdapter(getSupportFragmentManager(),
galleryTabLayout.getTabCount());
viewPagerGallery.setAdapter(tabsGalleryAdapter);
viewPagerGallery.setCurrentItem(1);
viewPagerGallery.addOnPageChangeListener(new TabLayout.TabLayoutOnPageChangeListener(galleryTabLayout));
galleryTabLayout.addOnTabSelectedListener(new TabLayout.OnTabSelectedListener() {
@Override
public void onTabSelected(TabLayout.Tab tab) {
viewPagerGallery.setCurrentItem(tab.getPosition());
}
@Override
public void onTabUnselected(TabLayout.Tab tab) {
}
@Override
public void onTabReselected(TabLayout.Tab tab) {
}
});
}
```
TabsGalleryAdapter code:
```
public class TabsGalleryAdapter extends FragmentPagerAdapter {
int mNoOfTabs;
public TabsGalleryAdapter(FragmentManager fm) {
super(fm);
}
public TabsGalleryAdapter(FragmentManager fm, int numberOfTabs) {
super(fm);
mNoOfTabs = numberOfTabs;
}
@Override
public Fragment getItem(int position) {
switch (position) {
case 0:
return new CameraGalleryFragment();
case 1:
return new RecyclerGalleryFragment();
case 2:
return new AddedPhotosFragment();
}
return null;
}
@Override
public int getCount() {
return mNoOfTabs;
}
}
```
GalleryActivity layout:
```
xml version="1.0" encoding="utf-8"?
```
and CameraFragment layout:
```
```
Maybe someone has an idea? Please give feedback, and have a nice day.<issue_comment>username_1: You can use a callback in the activity whenever you navigate to that fragment (the camera fragment in your case). In the callback method, set `tabLayout.setVisibility(View.GONE)`, and similarly, for the other fragment positions, set `tabLayout.setVisibility(View.VISIBLE)`.
Upvotes: 1 <issue_comment>username_2: I solved this by translating the `AppbarLayout` using its translationY attribute, in `onPageScrolled()` callback of the `ViewPager`'s `OnPageChangeListener` using the `AppbarLayout`'s bottom value.
refer to this answer <https://stackoverflow.com/a/54286160/8114428>
or visit this repository for more help <https://github.com/goody-h/ResidingTab>
Upvotes: 0
|
2018/03/18 | 1,101 | 3,773 |
<issue_start>username_0: I'm trying to block access to my PostgreSQL and allow access only to Localhost and my machine external IP, something like: "172.211.xx.xx". This IP is provided by my ISP (Internet Service Provider).
In `postgresql.conf` I set the following line:
```
listen_addresses = '179.211.xx.xx'
```
But I can't connect to the database from my machine. I get "Server don't listen". If I change to:
```
listen_addresses = '*'
```
everything works, but I can't do it. I need to enable access only to this IP. This is a security requirement of my project.
MY `pg_hba.conf`:
```
host all all 0.0.0.0/0 md5
```<issue_comment>username_1: The parameter `listen_addresses` at `postgresql.conf` sort of controls which ip addresses the server will answer on, not which ones the server will permit connections to authenticate. In my eyes, it's alright to set the `listen_addresses` to `*` and constrain the rest in the `pg_hba.conf`. In other words: doing the fine tuning at the `pg_hba.conf` is just fine.
So ..
```
listen_addresses = '*'
```
.. and ..
```
host all all 172.16.31.0/24 md5
```
.. should do. This means that all users have access to all databases from the IP range `172.16.31.0` to `172.16.31.255`.
You can go further limiting access for specific users to certain databases:
```
host my_db my_user 172.16.31.0/24 md5
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: In addition to [username_1' answer](https://stackoverflow.com/a/49349393/111036), note that `listen_addresses` [can also take a list](https://www.postgresql.org/docs/12/runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-SETTINGS) of IP addresses and/or host names.
If you have several interfaces and/or several IP adresses, and don't want Postgres to listen on all of them (for example to only listen on a LAN interface, but not on the WAN interface), you can use something like this in `postgresql.conf`:
```
listen_addresses = '127.0.0.1,192.168.1.2,192.168.1.3,my_server.example.lan'
```
Then, you still want to also configure `pg_hba.conf` and/or your firewall for control of the clients.
Upvotes: 1 <issue_comment>username_3: Had the same, `listen_addresses = '*'` was working but a particular IP was rejected. My mistake was that IP to use should be **NOT** the IP of the remote server trying to connect to PostgreSQL and not a public IP of PostgreSQL server but the IP of a network interface (duh). For example, on a PostgreSQL server run `ip a` or `ifconfig`
```
ip a
....
inet XX.X.X.XXX ....
```
then in `postgresql.conf` use returned IP
`listen_addresses = 'XX.X.X.XXX,localhost'`
plus, in `pg_hba.conf`, the IP of the remote machine trying to connect to PostgreSQL. Let's say that IP is YY.Y.YYY.Y:
`host my_db my_psql_user YY.Y.YYY.Y/32 md5`
Upvotes: 2 <issue_comment>username_4: ### `listen_addresses` are addresses local to the PostgreSQL server
The name of the `listen_addresses` setting in `postgresql.conf` may be misleading: It does **not** correspond to the remote addresses of client applications attempting to connect to the PostgreSQL server. Instead, you can think of the `listen_addresses` setting as the addresses that will be considered from the **local network interfaces** on the host running the PostgreSQL server for listening to incoming client requests.
If no `listen_addresses` setting is provided (e.g., commented out), it defaults to the loopback interface (i.e., `localhost`). Therefore, remote TCP connections to the PostgreSQL server won't be possible; only local connections through the loopback interface.
Note that you can set up filtering based on the remote addresses of the client applications in the `pg_hba.conf` file.
Upvotes: 0
|
2018/03/18 | 462 | 1,575 |
<issue_start>username_0: I'm programming against a web API that, for some of the fields, only returns them if you specify them in the query. If the full response were:
```
{a:"hello", b:"world", c:"of goo", d:"i am always present"}
```
and I called
```
api.getResponse(["a","b"])
```
I would get
```
{a:"hello", b:"world", d:"i am always present"}
```
in response.
How do i express that in the type language of typescript?<issue_comment>username_1: You can declare the always present and optional fields on separate interfaces then use `Pick` to choose the correct ones:
```
interface AlwaysPresentFields {
d: string;
}
interface OptionalFields {
a: string;
b: string;
c: string;
}
type ApiResponse<F extends keyof OptionalFields> = AlwaysPresentFields & Pick<OptionalFields, F>;
function getResponse<F extends keyof OptionalFields>(fields: F[]): ApiResponse<F> {
// Implementation
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: In advanced TypeScript we can use `Partial`, `Pick`, `Readonly`.
As in your case, ***keyof*** and ***Partial*** can be used. The same can be implemented using **Pick** too: [docs](https://www.typescriptlang.org/docs/handbook/advanced-types.html)
```
interface IResponse {
a: string;
b: string;
c: string;
d: string;
}
const getResponse = (responseKeys: Array<keyof IResponse>): Partial<IResponse> => {
const responseWithFilteredKeys: Partial<IResponse> = {};
const response: IResponse = {
a: 'a',
b: 'a',
c: 'a',
d: 'a',
};
responseKeys.forEach((key: keyof IResponse) => {
responseWithFilteredKeys[key] = response[key];
});
return responseWithFilteredKeys;
};
const filteredUserResponse: Partial<IResponse> = getResponse(['a', 'b']);
```
Upvotes: 0
|
2018/03/18 | 706 | 1,848 |
<issue_start>username_0: When I try to slice a Numpy array (3d), something unexpected occurs.
```
import numpy as np
x=np.array(
[ [[1., 2., 3., 4., 5.],
[6., 7., 8., 9., 0.],
[1., 2., 3., 4., 5.],
[6., 7., 8., 9., 0.]],
[ [11., 12., 13., 14., 15.],
[16., 17., 18., 19., 10.],
[11., 12., 13., 14., 15.],
[16., 17., 18., 19., 10.]],
[ [21., 22., 23., 24., 25.],
[26., 27., 28., 29., 20.],
[21., 22., 23., 24., 25.],
[26., 27., 28., 29., 20.]]]
)
print(x.shape) #(3,4,5)
print(x[:,0,[0,1,2,3,4]].shape) #(3,5) as expected
print(x[0,:,[0,1,2,3,4]].shape) #(5,4) why not (4,5)?
```
The last one swaps the dimensions unexpectedly. Why?
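The (5, 4) shape follows from NumPy's advanced-indexing rule: the scalar `0` and the list `[0, 1, 2, 3, 4]` are both advanced indices, and when advanced indices are separated by a slice, NumPy moves their broadcast dimension to the front of the result. A short sketch of the behavior and two common workarounds:

```python
import numpy as np

x = np.arange(3 * 4 * 5, dtype=float).reshape(3, 4, 5)

# Advanced indices in consecutive positions: result dimensions stay in place.
print(x[:, 0, [0, 1, 2, 3, 4]].shape)  # (3, 5)

# The advanced indices 0 and [0, 1, 2, 3, 4] are separated by the slice ':',
# so NumPy puts their broadcast dimension (5,) first: (5, 4), not (4, 5).
print(x[0, :, [0, 1, 2, 3, 4]].shape)  # (5, 4)

# Workaround 1: index in two steps so no slice sits between advanced indices.
print(x[0][:, [0, 1, 2, 3, 4]].shape)  # (4, 5)

# Workaround 2: simply transpose the result back.
print(x[0, :, [0, 1, 2, 3, 4]].T.shape)  # (4, 5)
```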
|
2018/03/18 | 1,308 | 4,339 |
<issue_start>username_0: I have a `Float32Array` from the `decodeAudioData` method, which I want to convert to a `Uint8Array` while **preserving the float32 IEEE 754 audio data**.
So far I tried,
```
var length = float32Array.length;
var emptyBuffer = new ArrayBuffer(length * 4);
var view = new DataView(emptyBuffer);
for (var i = 0; i < length; i++) {
view.setFloat32(i * 4, float32Array[i], true);
}
var outputArray = new Uint8Array(length);
for (var j = 0; j < length; j++) {
outputArray[j] = view.getUint8(j * 4);
}
return outputArray;
```
**Edit:**
I just need to hold on to the binary representation just as in this [answer.](https://stackoverflow.com/a/29342909/4130569)<issue_comment>username_1: It's not very clear what you're asking; or, rather, what it *appears* you're asking is a thing that makes no sense.
A `Float32Array` instance is a view onto a buffer of "raw" byte values, like all typed arrays. Each element of the array represents 4 of those raw bytes. Extracting a value via a simple array lookup:
```
var n = float32array[1];
```
implicitly interprets those 4 bytes as an IEEE 32-bit floating point value, and then that value is converted to a standard JavaScript number. JavaScript numbers are always 64-bit IEEE floating point values.
Similarly, a `Uint8Array` is a view onto a buffer, and each element gives the unsigned integer value of one byte. That is,
```
var n = uint8array[1];
```
accesses that element, interprets it as an unsigned one-byte integer, and converts that to a JavaScript number.
So: if you want to examine a list of 32-bit floating point values as the raw integer value of each byte, you can create a `Uint8Array` that "shares" the same buffer as the `Float32Array`:
```
var uintShared = new Uint8Array(float32array.buffer);
```
The number values you'd see from looking at the `Uint8Array` values will not appear to have anything to do with the number values you get from looking at the `Float32Array` elements, which is to be expected.
On the other hand, if you want to create a new `Uint8Array` to hold the apparent *values* from the `Float32Array`, you can just create a new array of the same length and copy each value:
```
var uintCopy = new Uint8Array(float32array.length);
for (let i = 0; i < float32array.length; ++i)
uintCopy[i] = float32array[i]; // deeply problematic; see below
```
Now that won't work too well, in general, because the numeric range of values in a `Float32Array` is vastly greater than that of values in the `Uint8Array`. For one thing, the 32-bit floating point values can be negative. What's more, even if you know that the floating point values are all integers in the range 0 to 255, you definitely will not get the same bit patterns in the `Uint8Array`, for the simple reason that a 32-bit floating point number is just not the same as an 8-bit unsigned integer. To "preserve the IEEE-754 representation" makes no sense.
So that's the reality of the situation. If you were to explain *why* you think you want to somehow cram all 32 bits of a 32-bit IEEE float into an 8-bit unsigned integer, it would be possible to provide a more directly helpful answer.
Upvotes: 2 <issue_comment>username_2: ```
var output = new Uint8Array(float32Array.length);
for (var i = 0; i < float32Array.length; i++) {
var tmp = Math.max(-1, Math.min(1, float32Array[i]));
tmp = tmp < 0 ? (tmp * 0x8000) : (tmp * 0x7FFF);
tmp = tmp / 256;
output[i] = tmp + 128;
}
return output;
```
Anyone in doubt can test this easily with [Audacity](https://www.audacityteam.org/)'s *Import Raw Data* feature.
1. Download the [sample raw data](https://www.dropbox.com/s/okzlc6ljflmae3p/rawportal?dl=0) which I decoded from a video file using Web Audio Api's decodeAudioData method.
2. Convert the Float32Array that *sample raw data* is filled with to Uint8Array by using the method above (or use your own method e.g. `new Uint8Array(float32Array.buffer)` to hear the corrupted sizzling sound) and download the uint8 pcm file.
`forceDownload(new Blob([output], {
type: 'application/octet-binary'
}));`
3. Encode your downloaded data in Audacity using *File-> Import -> Raw Data...* Encoding should be set to *Unsigned 8-bit PCM* and Sample Rate should be 16000 Hz since the original decoded audio file was in 16000 Hz.
Upvotes: 3 [selected_answer]
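The accepted answer's mapping (clamp to [-1, 1], scale as 16-bit PCM, shift to unsigned 8-bit) is easy to sanity-check outside the browser; the following Python sketch mirrors the same arithmetic (the function name is just illustrative):

```python
def float32_to_u8_pcm(samples):
    """Mirror the answer: clamp to [-1, 1], scale as 16-bit, shift to unsigned 8-bit."""
    out = []
    for s in samples:
        s = max(-1.0, min(1.0, s))
        v = s * 0x8000 if s < 0 else s * 0x7FFF
        # Uint8Array assignment truncates toward zero, so truncate the final sum.
        out.append(int(v / 256 + 128))
    return out

print(float32_to_u8_pcm([-1.0, -0.5, 0.0, 1.0]))  # [0, 64, 128, 255]
```

Full scale maps to the whole 0 to 255 range with silence at 128, which is why Audacity's *Unsigned 8-bit PCM* import reproduces the audio.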
|
2018/03/18 | 1,172 | 3,427 |
<issue_start>username_0: I have two lists:
```
a = ['A', 'B', 'C', 'D']
b = ['B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']
```
I would like to check list A against list B, so that if an item in list A exists in list B, I get a True result, and if not, False. i.e.
```
c = [False, True, True, True]
```
I've tried the following:
```
c = [False for i in range(len(a))]
for i in a:
for j in b:
if a[i] == b[j]:
c[i] = True
```
I get the error:
TypeError: list indices must be integers, not str
I've also tried:
c = [True for i in a if i in b]
which returns:
c = [True, True, True]
I would like the correct answer to be
c = [False, True, True, True]<issue_comment>username_1: You could use a list comprehension in a more readable way:
```
a = ['A', 'B', 'C', 'D']
b = ['B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']
c = [True if element in b else False for element in a]
print(c)
# [False, True, True, True]
```
Upvotes: 2 <issue_comment>username_2: ```
a = ['A', 'B', 'C', 'D']
b = ['B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']
c = [x in b for x in a]
```
Upvotes: 2 <issue_comment>username_3: Sets are handy here <https://docs.python.org/3.6/library/stdtypes.html#set-types-set-frozenset>:
```
# gives you a set of elements that are common for both lists.
intersect = set(a) & set(b)
c = [x in intersect for x in a]
```
Upvotes: 1 <issue_comment>username_4: The Pythonic way to do this is with `[x in b for x in a]`, as suggested by @Dan and @username_2.
The [`in`](https://docs.python.org/3/reference/expressions.html#in) operator checks for membership, and is generally what you want to use if you want to check if a collection has a certain value. It will give you a boolean value.
```
l = ['A', 'B', 'C']
print('A' in l) # True
print('Z' in l) # False
```
If you then combine this with a [list comprehension](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions), you get a very concise way to express what you want.
```
a = ['A', 'B', 'C', 'D']
b = ['B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']
c = [x in b for x in a]
print(c) # [False, True, True, True]
```
Large lists
===========
If the list you are checking is small, then the above code works very well. However, once you start to check lists with thousands or millions of elements, then you will notice that the above code will start to get quite slow. That is because Python has to look at each element in turn to find out whether the element is in the list or not. So for a list with a million elements in, Python might have to do a million comparison operations.
To speed things up, you can use a [set](https://docs.python.org/3/tutorial/datastructures.html#sets). Sets use [hash tables](https://en.wikipedia.org/wiki/Hash_table) under the hood, and the great thing about hash tables is that lookups take roughly the same amount of time no matter how many elements there are in them. Even with millions of elements, they are super quick. You would use them like this:
```
a = ['A', 'B', 'C', 'D']
b = ['B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']
b_set = set(b)
c = [x in b_set for x in a]
```
Better still, if `b` doesn't have to be a list, you can just define it as a set. This saves you the extra step of converting the list into a set, which will be slow if you have a large number of elements.
```
a = ['A', 'B', 'C', 'D']
b = {'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I'}
c = [x in b for x in a]
```
Upvotes: 1
|
2018/03/18 | 349 | 1,239 |
<issue_start>username_0: I need to use method `setStartTimestamp(OffsetDateTime startTimestamp)` but my minSdk is 21 so it shows me `Call requires API level 26`. Is there any way how can I use OffsetDateTime on lower apis?<issue_comment>username_1: `java.time` APIs on Android require API 26.
For older API levels you can use [ThreeTenABP](https://github.com/JakeWharton/ThreeTenABP) which is the Android version of JSR-310 `java.time` backport for Java 6.
Upvotes: 4 <issue_comment>username_2: You can use **Java 8+ API desugaring support (Android Gradle Plugin 4.0.0+)**
It is described in [this link](https://developer.android.com/studio/write/java8-support)
Simply you have to modify your build.gradle file in your app module like this:
```
android {
defaultConfig {
// Required when setting minSdkVersion to 20 or lower
multiDexEnabled true
}
compileOptions {
// Flag to enable support for the new language APIs
coreLibraryDesugaringEnabled true
// Sets Java compatibility to Java 8
sourceCompatibility JavaVersion.VERSION_1_8
targetCompatibility JavaVersion.VERSION_1_8
}
}
dependencies {
coreLibraryDesugaring 'com.android.tools:desugar_jdk_libs:1.0.9'
}
```
Upvotes: 5 [selected_answer]
|
2018/03/18 | 426 | 1,493 |
<issue_start>username_0: I want to do this action:
[](https://i.stack.imgur.com/UEo2a.png)
using Heroku CLI.
If I have the remote git on my computer I can do `git push my-heroku-remote master`
But because my heroku app is already connected to the git project, I find this approach redundant.
Any ideas?<issue_comment>username_1: I prefer the GitHub auto-deployment over the `git push heroku master` because it works great with a GitHub Flow (Pull Requests to master). But if you would rather do manual deployments to Heroku from your dev machine you can fetch and push:
```
git fetch origin master:master
git push heroku master:master
```
(Assuming the GitHub remote is named `origin` and the Heroku remote is named `heroku`.)
Upvotes: 2 <issue_comment>username_2: Taken from [here](https://help.heroku.com/I3E6QPQN/how-do-i-force-a-new-deploy-without-adding-a-commit-to-my-github-repo):
This can be done with the [builds-create](https://devcenter.heroku.com/articles/platform-api-reference#build-create) Platform API endpoint.
There is a [Builds CLI plugin](https://github.com/heroku/heroku-builds) that makes this easier:
```sh
heroku builds:create --source-url https://user:token@api.github.com/repos///tarball/master/ --app
```
Where user is your Github username and token is a Github personal access token ([reference](https://developer.github.com/v3/auth/#via-oauth-and-personal-access-tokens))
Upvotes: 0
|
2018/03/18
| 542 | 1,941 |
<issue_start>username_0: Sorry if this has been asked before, I have been looking for an answer for hours without success.
So, I have a number (it's actually a class object, but for example's sake it may be better to explain using integers): 1, and a list of unique numbers (my number not included).
Example: [2, 5, 3, 8, 9, 4...]
What I want to do is compare my number (1) with all numbers from the list, but two items/pair at a time, so the first ones here would be (2, 5). Then, with these three numbers (1, 2, 5) I can check if they meet a condition from a function of mine. If not, take my number again (1) and compare it with the next two items (3, 8) from the list and so on until the condition for all three numbers is met (or not).
Can you guys please help me out on how to achieve this? Thanks in advance.
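The walk described in the question can be sketched like this — `condition` here is a made-up stand-in for the real three-number check on the class objects:

```python
def condition(a, b, c):
    # Hypothetical placeholder for the real check on the class objects.
    return a + b + c > 11

def find_matching_pair(target, numbers, condition):
    # Walk the list two items at a time: (0,1), (2,3), (4,5), ...
    for i in range(0, len(numbers) - 1, 2):
        pair = (numbers[i], numbers[i + 1])
        if condition(target, *pair):
            return pair
    return None  # no pair satisfied the condition

print(find_matching_pair(1, [2, 5, 3, 8, 9, 4], condition))  # -> (3, 8)
```

If the pairs may overlap, or the list length can be odd, the `range` step and bound would need adjusting.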
|
2018/03/18
| 367 | 1,309 |
<issue_start>username_0: I want to remove `onAction` to decouple my view from my controller.
this is the javaFx code :
```
```
I want to associate the button's `id` with my controller.
Thanks
|
2018/03/18
| 466 | 1,537 |
<issue_start>username_0: ```
function truncate(str, num) {
if (str.length >= num) {
if (num > 3) {
str.substr(0, num - 3) + '...';
} else {
str.substr(0, num) + '...';
}
} return str;
}
console.log(truncate("A-tisket a-tasket A green and yellow basket", 11));
```
It returned the original string "A-tisket a-tasket A green and yellow basket". I wonder what's the problem here.<issue_comment>username_1: This is because `str.substr(...)` returns a new string and leaves `str` unchanged, so the function ends up returning the original `str`. Create a separate variable, assign the substring to it, and return that value:
```js
function truncate(str, num) {
  let subStr = str; // keep the original string when it is short enough
if (str.length >= num) {
if (num > 3) {
subStr = str.substr(0, num - 3) + '...';
} else {
subStr = str.substr(0, num) + '...';
}
}
return subStr;
}
console.log(truncate("A-tisket a-tasket A green and yellow basket", 11));
```
Upvotes: 0 <issue_comment>username_2: `substr` is an immutable operation. That means that when you execute it, it doesn't change the original value of the string that you applied it to.
In order to make it work, you should use the string that `substr` returns:
```
function truncate(str, num) {
if (str.length >= num) {
if (num > 3) {
return str.substr(0, num - 3) + '...';
} else {
return str.substr(0, num) + '...';
}
} return str;
}
console.log(truncate("A-tisket a-tasket A green and yellow basket", 11));
```
Upvotes: 1
|
2018/03/18
| 417 | 1,560 |
<issue_start>username_0: I need to implement a function in javascript that runs an async process in the following way:
1. The process will run async checking if a variable is set to true
2. When true, the async process will stop
3. When the variables is set to false, the async process will start again.
What is the best approach? A global variable that is accessible to both functions (the async one and the UI), or callbacks?<issue_comment>username_1: Without using any external libs etc, a simple way would be using some form of encapsulation that returns you a stop method.
Here is a simple example.
```js
function doAsyncStuff() {
var i = setInterval(function () {
console.log("tick");
}, 1000);
return function stop() {
console.log("Stop called");
clearInterval(i);
}
}
var stop = doAsyncStuff();
document.querySelector("button").onclick = stop;
```
```html
Stop
```
You could even extend this to return multiple functions, like resume, restart, etc.
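A sketch of that extension — the same encapsulation, but handing back both `stop` and `resume` controls (all names here are invented):

```javascript
function startTicker(onTick, ms) {
  let id = setInterval(onTick, ms);
  return {
    stop() { if (id !== null) { clearInterval(id); id = null; } },
    resume() { if (id === null) { id = setInterval(onTick, ms); } },
  };
}

let ticks = 0;
const ticker = startTicker(() => { ticks++; }, 20);
setTimeout(() => {
  ticker.stop();                 // pause the async work
  const seen = ticks;
  setTimeout(() => {
    console.log(ticks === seen); // no ticks arrive while stopped
    ticker.resume();             // start it again
    setTimeout(() => ticker.stop(), 30);
  }, 60);
}, 50);
```

Keeping the interval id private inside the closure means callers can only interact through the returned functions.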
Upvotes: 3 [selected_answer]<issue_comment>username_2: Something that uses events would work just fine.
```js
function main() {
main.running = true
doSomething()
}
function doSomething() {
if (!input.checked) {
console.log("checking")
window.setTimeout(doSomething,1000);
}
else {
main.running = false;
}
}
function changeHandler(event) {
if (event.target.checked == false && main.running == false) {
main();
}
}
var input = document.getElementById('check');
input.onchange=changeHandler;
main.running = false;
```
Upvotes: 1
|
2018/03/18
| 485 | 1,838 |
<issue_start>username_0: Webpack v4 has introduced two sets of defaults: `production` and `development`. My question is: Is there a way to refer to those inside a configuration file? I know I can still pass environment variables as:
```
--env.NODE_ENV=development
```
by doing this I have two independent environment variables and this doesn't feel right. Another option would be obviously to refer to a different config file and this doesn't look like an optimal solution for simple configurations as well.
Am I missing something here?<issue_comment>username_1: Found a better way in [a webpack github issue](https://github.com/webpack/webpack/issues/6460).
Since webpack 2 you can export a function in `webpack.config.js`, the parsed `argv` will be passed to that function.
For webpack 4 you can write the config like:
```
// webpack.config.js
module.exports = (env, argv) => {
console.log(argv.mode);
return { /* your config object */ };
};
// $webpack-cli --mode development
// development
```
---
Original Answer:
You can use some libs like `minimist` to parse the arguments passed by the cli:
```
// webpack.config.js
const args = require('minimist')(process.argv.slice(2));
console.log(args.mode);
// $webpack-cli --mode development
// development
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: I've just built this simple NPM package for importing current Webpack 4 mode: <https://github.com/username_2/webpack-mode>. Here is how you can use it:
```
// webpack.config.js
const { isProduction } = require('webpack-mode');
console.log(isProduction); // => true
// ... rest of the config
```
It works very simply: it just reads the process arguments to determine the value of the `mode` argument. By default `webpack-mode.isProduction` will return `true` (the original behaviour of Webpack 4).
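The idea can be sketched in a few lines (this is an illustration of the approach, not the package's actual source):

```javascript
// Determine the webpack "mode" from CLI arguments, defaulting to
// "production" (webpack 4's own default).
function detectMode(argv) {
  const i = argv.indexOf('--mode');
  if (i !== -1 && argv[i + 1]) return argv[i + 1];
  const pair = argv.find(a => a.startsWith('--mode='));
  return pair ? pair.split('=')[1] : 'production';
}

console.log(detectMode(['--mode', 'development'])); // development
console.log(detectMode([]));                        // production
```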
Upvotes: 0
|
2018/03/18
| 824 | 2,719 |
<issue_start>username_0: I need to write a Python program that shuffles a pack of cards and distributes it among four players. You need to stop the game when someone has 4 hearts and say who the winner is. As I'm a beginner in Python and finished it in a straightforward way, can anyone give me a more optimized way to code this? Here is my try..
```
import random
colour=["Heart","Spade","Club","Diamond"]
a=[2,3,4,5,6,7,8,9,10,"J","Q","K","A"]
player1,player2,player3,player4=[],[],[],[]
shuffle=[{x:y} for x in colour for y in a ]
cnt=0
while len(shuffle)!=0:
p1,p2,p3,p4=random.sample(population=shuffle,k=4)
i1=shuffle.index(p1)
del shuffle[i1]
i2=shuffle.index(p2)
del shuffle[i2]
i3=shuffle.index(p3)
del shuffle[i3]
i4=shuffle.index(p4)
del shuffle[i4]
if "Heart" in p1:
player1.append(p1)
if len(player1)>=4:
print("player 1 is win ,heart cards are:",player1)
break
if "Heart" in p2:
player2.append(p2)
if len(player2)>=4:
print("player 2 is win,heart cards are:",player2)
break
if "Heart" in p3:
player3.append(p3)
if len(player3)>=4:
print("player 3 is win,heart cards are:",player3)
break
if "Heart" in p4:
player4.append(p4)
if len(player4)>=4:
print("player 4 is win,heart cards are:",player4)
break
```
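For what it's worth, here is one way the program above could be tightened up — shuffle once, deal round-robin, and stop at the first player holding four hearts (a sketch, not the only possible design):

```python
import random

def play(num_players=4, winning_hearts=4):
    ranks = [2, 3, 4, 5, 6, 7, 8, 9, 10, "J", "Q", "K", "A"]
    suits = ["Heart", "Spade", "Club", "Diamond"]
    deck = [(suit, rank) for suit in suits for rank in ranks]
    random.shuffle(deck)  # shuffle once instead of sampling and deleting repeatedly

    hearts = [[] for _ in range(num_players)]
    for i, card in enumerate(deck):
        player = i % num_players          # deal round-robin
        if card[0] == "Heart":
            hearts[player].append(card)
            if len(hearts[player]) == winning_hearts:
                return player + 1, hearts[player]
    return None  # unreachable: 13 hearts among 4 players guarantees a winner

winner, cards = play()
print("player", winner, "wins, heart cards are:", cards)
```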
|
2018/03/18
| 549 | 2,178 |
<issue_start>username_0: I have an application backed by Neo4j database written in python.
Does anyone know what is the best approach to unit test my application ?
I have found this can be easily done in Java with usage of ImpermanentGraphDatabase. Is there a similar approach in python ?
Any tips are appreciated.<issue_comment>username_1: I'm going to say the easiest way would be to mock out Neo4j entirely.
What I like to do is to simplify your entity classes to as little pure-python functionality as possible.
Even if you're using some fancy ORM to access your data - for example in Django - I still prefer to create a pure-python class to represent my model and create a minimalistic dao method, which will be mocked-out during unit tests. That way I don't have to ever touch the database in my unit tests, which I think is how unit tests should behave.
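As a sketch of that layout (all names here are invented for illustration): the model is plain Python, the DAO is the only thing that would ever touch Neo4j, and the unit test mocks the DAO out entirely:

```python
from unittest import mock

class Person:
    """Plain-python model -- no Neo4j types anywhere."""
    def __init__(self, name, friends=()):
        self.name = name
        self.friends = list(friends)

class PersonDao:
    """Minimal DAO: in real life, load() would run a Cypher query."""
    def load(self, name):
        raise NotImplementedError

def friend_count(dao, name):
    # Business logic under test depends on the DAO, not the database.
    return len(dao.load(name).friends)

# In the unit test, mock out the DAO so the database is never touched:
dao = mock.Mock(spec=PersonDao)
dao.load.return_value = Person("alice", friends=["bob", "carol"])
assert friend_count(dao, "alice") == 2
dao.load.assert_called_once_with("alice")
```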
That being said, if you really want to have neo4j functionality in your tests, have a look at the [IntegrationTestCase](https://github.com/neo4j/neo4j-python-driver/blob/1.6/test/integration/tools.py) of the official neo4j-python-driver package. It looks like it is providing a base class to inherit your integration tests (because if you're pulling the DB into your tests, you're essentially doing Integration Tests) from, as it takes care of starting up and shutting down the server between test runs.
Upvotes: 3 [selected_answer]<issue_comment>username_2: More answers have been provided via Neo4j community: <https://community.neo4j.com/t/is-there-a-library-to-unit-test-graphdb-cypher-queries-using-python/8227/5>
(Easy) **Check your cypher syntax using `libcypher-parser`**
You can also put your queries into .cypher files and use a linter like libcypher-parser (<https://github.com/cleishm/libcypher-parser>) to check that your queries are valid.
(Advanced) **Spin up a docker instance**
Use a simple docker container to spin up another Neo4j instance, then you can test to your heart's content ... There you can (in this sandboxed environment):
* just run your cypher queries to see if they work
* populate/seed the sandboxed Neo4j database instance and run more substantial tests
Upvotes: 1
|
2018/03/18
| 622 | 2,434 |
<issue_start>username_0: I have to make a program where we replace any two/three words (in a string) of our choosing (e.g. replace "AWAY FROM KEYBOARD" with "AFK") without using replace, find, translate, or encode. I'm not sure exactly how this would be done. I've managed to be able to do this by replacing one word and making it shorter (I'm not allowed to post it) (i.e. changing "Hello" to "Hi"), but I can't seem to figure this one out. I've looked at similar questions but none really helped me without using stuff I'm not allowed to use or they had a different purpose.
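One sketch of how this can be done with slicing alone (no `replace`, `find`, `translate`, or `encode`):

```python
def swap_phrase(text, old, new):
    # Rebuild the string, swapping `old` for `new` wherever it starts.
    out, i, n = "", 0, len(old)
    while i <= len(text) - n:
        if text[i:i + n] == old:   # slice comparison, not str.find()
            out += new
            i += n
        else:
            out += text[i]
            i += 1
    return out + text[i:]

print(swap_phrase("brb, AWAY FROM KEYBOARD for a bit", "AWAY FROM KEYBOARD", "AFK"))
# -> brb, AFK for a bit
```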
|
2018/03/18
| 1,812 | 5,633 |
<issue_start>username_0: I'm using celery (solo pool with concurrency=1) and I want to be able to shut down the worker after a particular task has run. A caveat is that I want to avoid any possibility of the worker picking up any further tasks after that one.
Here's my attempt in the outline:
```
from __future__ import absolute_import, unicode_literals
from celery import Celery
from celery.exceptions import WorkerShutdown
from celery.signals import task_postrun
app = Celery()
app.config_from_object('celeryconfig')
@app.task
def add(x, y):
return x + y
@task_postrun.connect(sender=add)
def shutdown(*args, **kwargs):
raise WorkerShutdown()
```
However, when I run the worker
```
celery -A celeryapp worker --concurrency=1 --pool=solo
```
and run the task
```
add.delay(1,4)
```
I get the following:
```
-------------- celery@sam-APOLLO-2000 v4.0.2 (latentcall)
---- **** -----
--- * *** * -- Linux-4.4.0-116-generic-x86_64-with-Ubuntu-16.04-xenial 2018-03-18 14:08:37
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: __main__:0x7f596896ce90
- ** ---------- .> transport: redis://localhost:6379/0
- ** ---------- .> results: redis://localhost/
- *** --- * --- .> concurrency: 4 (solo)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[2018-03-18 14:08:39,892: WARNING/MainProcess] Restoring 1 unacknowledged message(s)
```
The task is re-queued and will be run again on another worker, leading to a loop.
This also happens when I move the `WorkerShutdown` exception within the task itself.
```
@app.task
def add(x, y):
print(x + y)
raise WorkerShutdown()
```
Is there a way I can shut down the worker after a particular task, while avoiding this unfortunate side-effect?<issue_comment>username_1: The recommended process for shutting down a worker is to send the `TERM` signal. This will cause a celery worker to shutdown after completing any currently running tasks. If you send a `QUIT` signal to the worker's main process, the worker will shutdown immediately.
The celery docs, however, usually discuss this in terms of managing celery from a command line or via systemd/initd, but celery additionally provides a remote worker control API via `celery.app.control`.
You can [revoke](http://docs.celeryproject.org/en/latest/reference/celery.app.control.html#celery.app.control.Control.revoke) a task to prevent workers from executing the task. This should prevent the loop you are experiencing. Further, control supports [shutdown](http://docs.celeryproject.org/en/latest/reference/celery.app.control.html#celery.app.control.Control.shutdown) of a worker in this manner as well.
So I imagine the following will get you the behavior you desire.
```
@app.task(bind=True)
def shutdown(self):
app.control.revoke(self.id) # prevent this task from being executed again
app.control.shutdown() # send shutdown signal to all workers
```
Since it's not currently possible to ack the task from within the task, then continue executing said task, this method of using `revoke` circumvents this problem so that, even if the task is queued again, the new worker will simply ignore it.
Alternatively, the following would also prevent a redelivered task from being executed a second time...
```
@app.task(bind=True)
def some_task(self):
if self.request.delivery_info['redelivered']:
raise Ignore() # ignore if this task was redelivered
print('This should only execute on first receipt of task')
```
Also worth noting [`AsyncResult`](http://docs.celeryproject.org/en/latest/reference/celery.result.html#celery.result.AsyncResult.revoke) also has a `revoke` method that calls `self.app.control.revoke` for you.
Upvotes: 5 [selected_answer]<issue_comment>username_2: If you shut down the worker after the task has completed, it won't be re-queued again.
```
@task_postrun.connect(sender=add)
def shutdown(*args, **kwargs):
app.control.broadcast('shutdown')
```
This will gracefully shutdown the worker after tasks is completed.
```
[2018-04-01 18:44:14,627: INFO/MainProcess] Connected to redis://localhost:6379/0
[2018-04-01 18:44:14,656: INFO/MainProcess] mingle: searching for neighbors
[2018-04-01 18:44:15,719: INFO/MainProcess] mingle: all alone
[2018-04-01 18:44:15,742: INFO/MainProcess] celery@foo ready.
[2018-04-01 18:46:28,572: INFO/MainProcess] Received task: celery_worker_stop.add[ac8a65ff-5aad-41a6-a2d6-a659d021fb9b]
[2018-04-01 18:46:28,585: INFO/ForkPoolWorker-4] Task celery_worker_stop.add[ac8a65ff-5aad-41a6-a2d6-a659d021fb9b] succeeded in 0.005628278013318777s: 3
[2018-04-01 18:46:28,665: WARNING/MainProcess] Got shutdown from remote
```
Note: broadcast will shut down all workers. If you want to shut down a specific worker, start the worker with a name
```
celery -A celeryapp worker -n self_killing --concurrency=1 --pool=solo
```
Now you can shut down this worker with the destination parameter.
```
app.control.broadcast('shutdown', destination=['celery@self_killing'])
```
Upvotes: 3 <issue_comment>username_3: If you need to shut down a specific worker and don't know its name in advance, you can get it from the task properties. Based on the answers above, you can use:
```
app.control.shutdown(destination=[self.request.hostname])
```
or
```
app.control.broadcast('shutdown', destination=[self.request.hostname])
```
**Note:**
* A worker should be started with a name (option `'-n'`);
* The task should be defined with `bind=True` parameter.
Upvotes: 2
|
2018/03/18
| 346 | 1,125 |
<issue_start>username_0: I'm trying to add a class to
using:
```
document.getElementById("sp1").classList.add("fa fa-hand-rock-o");
```
But it is showing error:
>
> String contains an invalid character
>
>
><issue_comment>username_1: The white space between two class name is creating the issue.If need to add multiple class separate them by a comma and put each of them inside a quote
```js
document.getElementById("sp1").classList.add("fa", "fa-hand-rock-o")
```
```css
.fa {
color: red;
}
.fa-hand-rock-o {
font-size: 18px;
}
```
```html
- Test Text
```
Upvotes: 1 <issue_comment>username_2: `fa fa-hand-rock-o` can not be a single class because class names can not have space(s).
Here I assume you are trying to add two different classes. When adding multiple classes using `classList.add()`, specify all the classes as individual comma-separated strings like:
`.add("fa", "fa-hand-rock-o")`
**Code Example:**
```js
document.getElementById("sp1").classList.add("fa","fa-hand-rock-o");
console.log(document.getElementById("sp1"));
```
```html
Test Container
```
Upvotes: 5 [selected_answer]
|
2018/03/18
| 816 | 2,534 |
<issue_start>username_0: I want to get one boat item from the `Boat` table only if that boat is not in the `RentBoat` table and has a value on the `IsOnRent` column set to true.
This is my code so far:
```
db.Boats.SingleOrDefault(x => !db.RentBoats.Any(s => s.BoatID == x.BoatID && s.IsOnRent == true));
```
The error I am getting is:
>
> The sequence contains several elements
>
>
>
Where is my code wrong? Any help would be great.
Essentially I want what this query does, but I want it to return the item matched from an Id or return null (a single item, not a list).
```
db.Boats.Where(boat => !db.RentBoats.Any(x => x.BoatID == boat.BoatID && x.IsOnRent == true)).ToList();
public class Boat
{
[Key]
public int BoatID { get; set; }
public string BoatName { get; set; }
}
public class RentBoat
{
[Key]
[Required]
public int RentBoatID { get; set; }
[ForeignKey("Boat")]
public int BoatID { get; set; }
public string BoatName { get; set; }
public bool IsOnRent { get; set; }
public virtual Boat Boat { get; set; }
}
```
>
> EDIT
>
>
>
This is how I solved it in the end:
```
using (var db = new BoatContext())
{
List IsBoatRented = db.RentBoats.Where(x => x.BoatID == boatId && x.IsOnRent == true).ToList();
if (IsBoatRented.Count() == 0)
{
returnedBoat = db.Boats.FirstOrDefault(x => x.BoatID == boatId);
}
}
```
I know it isn't as efficient or good looking as one single query, but I couldn't make this work with one query. If somebody can rewrite these queries as one query, feel free to help. I would appreciate it.
|
2018/03/18
| 1,411 | 4,191 |
<issue_start>username_0: I am trying to do a fairly complex (for me) query that will grab a Description field from a Main Table and then append it with titles and values from related Look-Up-Tables. Not all records have records in the Look-up tables. I'll pose further questions as subsequent questions as I go along, but to start my issue is that only those records with values in all the tables show up.
<http://sqlfiddle.com/#!9/09047/13>
* (null)
* This is Record 2 Text
**Color**:
Red
**Fruit**:
Apple
* (null)
If I use Concat\_WS I get all records but my 'label' in the concat disappears:
<http://sqlfiddle.com/#!9/09047/16>
* This is Record 1 Text
Blue
* This is Record 2 Text
Red
Apple
* This is Record 3 Text
Grape
So my first step is to get all the record descriptions regardless of how many Look-up-Tables they exist in and to get the Names/Labels displaying.<issue_comment>username_1: It looks like you need `COALESCE`:
```
Select J.id,
Concat(J.Description,
COALESCE(Concat('**Color**:
',
group_concat(F.Name SEPARATOR '
')),''),
'
',
COALESCE(Concat('**Fruit**:
',
group_concat(F2.Name SEPARATOR '
')),'')
) AS output
from Main J
Left Join LUT_1 L On J.ID=L.MainID
Left Join LUT_Names_1 F On F.ID=L.LUT_NAME_ID
Left Join LUT_2 L2 On J.ID=L2.MainID
Left Join LUT_Names_2 F2 On F2.ID=L2.LUT_NAME_ID
Group by J.ID;
```
**[SQLFiddle Demo](http://sqlfiddle.com/#!9/09047/22/0)**
**EDIT:**
As always with MySQL, the query itself relies on a MySQL extension. If you set the SQL mode to [`ONLY_FULL_GROUP_BY`](https://dev.mysql.com/doc/refman/5.7/en/sql-mode.html#sqlmode_only_full_group_by) (the default for MySQL 5.7.5 and above):
```
SET sql_mode=ONLY_FULL_GROUP_BY;
-- query will return error
```
>
> J.Description' isn't in GROUP BY
>
>
>
To correct this you will need to use aggregation function on that column like: MAX:
```
SET sql_mode=ONLY_FULL_GROUP_BY;
Select J.id,
Concat(MAX(J.Description),
COALESCE(Concat('**Color**:
',
group_concat(F.Name SEPARATOR '
')),''),
'
',
COALESCE(Concat('**Fruit**:
',
group_concat(F2.Name SEPARATOR '
')),'')
)
from Main J
Left Join LUT_1 L On J.ID=L.MainID
Left Join LUT_Names_1 F On F.ID=L.LUT_NAME_ID
Left Join LUT_2 L2 On J.ID=L2.MainID
Left Join LUT_Names_2 F2 On F2.ID=L2.LUT_NAME_ID
Group by J.ID;
```
Upvotes: 1 [selected_answer]<issue_comment>username_2: I think the `concat_ws()` may be throwing off what you want to do.
The following produces the two labels, even when there are no values:
```
Select J.id,
Concat(J.Description,
'
',
'**Color**:
',
coalesce(group_concat(F.Name SEPARATOR '
'), ''),
'
',
'**Fruit**:
',
coalesce(group_concat(F2.Name SEPARATOR '
'), '')
)
from Main J Left Join
LUT_1 L
On J.ID = L.MainID Left Join
LUT_Names_1 F
On F.ID = L.LUT_NAME_ID Left Join
LUT_2 L2
On J.ID = L2.MainID Left Join
LUT_Names_2 F2
On F2.ID = L2.LUT_NAME_ID
Group by J.ID, J.Description;
```
[Here](http://sqlfiddle.com/#!9/09047/30) is a SQL Fiddle.
In addition, if you have multiple fruits or colors, you are going to get duplicates. For this reason, you want the `distinct` keyword (or to pre-aggregate along each dimension). So the working SQL is more like this:
```
Select J.id,
Concat(J.Description,
'
',
'**Color**:
',
coalesce(group_concat(distinct F.Name SEPARATOR '
'), ''),
'
',
'**Fruit**:
',
coalesce(group_concat(distinct F2.Name SEPARATOR '
'), '')
)
from Main J Left Join
LUT_1 L
On J.ID = L.MainID Left Join
LUT_Names_1 F
On F.ID = L.LUT_NAME_ID Left Join
LUT_2 L2
On J.ID = L2.MainID Left Join
LUT_Names_2 F2
On F2.ID = L2.LUT_NAME_ID
Group by J.ID, J.Description
```
[Here](http://sqlfiddle.com/#!9/b578b1/3) is a SQL Fiddle that illustrates this point. Just remove the `distinct` and see the difference in the results.
Upvotes: 1
|
2018/03/18
| 965 | 3,855 |
<issue_start>username_0: I have this odd issue when building a form in design view for an Access (2016) database. I have various fields with a data type of Yes/No. When creating a new form based on this table and dragging in fields from the Field List a text box appears instead of a checkbox. When I right click on the field the Check Box option in the 'Convert To Check Box' menu entry is greyed out.
I've tried adding a new checkbox and setting the control source to one of the Yes/No fields which does work but it's inconvenient to create and name all of these controls.
I've tried each of the form creation methods (Form, Form Design, Blank Form choosing the control source and Form Wizard) all with the same result. I've tried specifying the default value for the Yes/No fields (No) but that makes no difference.
Is this a bug, or just an awkward design? I'm pretty sure I've created other DBs where the boolean fields have automatically produced checkboxes in design mode. This DB is super simple - it has only one table which I created today so I'm not sure what's going on here.<issue_comment>username_1: Newly created forms in Access (should) automatically copy the control and it's attributes as specified under *Lookup* in table design.
Go to table design for the table you want to adjust, and adjust it according to the following screenshot:

Upvotes: 3 [selected_answer]<issue_comment>username_2: There is no bug, I'm pretty sure you're just misunderstanding something (and your question isn't very clear).
Can you back up a bit and [edit] your question to add a bit of an explanation of what you're trying to do and what the end result will be, and also add an image showing the problem? My main question for you is *why do you need to create so many text boxes?* They should only be created once, and then done. Hide or show them if needed.
Anyhow, TextBoxes can not be "changed to" Checkboxes because they hold different types of data. You could on the other hand convert between Toggle Buttons, Option Buttons and Checkboxes, to a checkboxes.
If you want checkboxes then don't use the field list to add them fist as text boxes. Instead, just add a checkbox and then assign the field to it.
[](https://i.stack.imgur.com/LIEmT.gif)
Upvotes: -1 <issue_comment>username_3: I also met this problem. For me, it was after moving from brave ol Access 2003 on Windows 7 to Access 2007 on Windows 10.
With 2003, dragging Yes/No fields from the field list onto the form displays checkboxes. It is very convenient when there are many Yes/No fields on a form, much faster than assigning a control source to a bunch of individual unbound checkboxes.
But with Access 2007, I confirm that dragging will drop a textbox ! Whereas dragging lookup fields works normally : it drops comboboxes.
This is a very odd bug !
The only (odd as well!) way I could make Access 2007 come back to normal was to drag at the same time a Combo AND a Checkbox ! Then the next Yes/No fields would be dropped as Checkboxes. Very odd !
I find the 2007 FieldList absolutely horrible compared to 2003
Extremely slow, especially on linked tables.
Stupidly merged with the Property Sheet, so that you cannot have your Field List AND your Property Sheet displaying together.
The Form's Property sheet is also merged with the Query's Property sheet of its record source (there were 2 different property sheets in Access 2003). You can never be sure that edits to a record source will be properly saved !!!
And do not forget to close the Access 2007 Field List before closing your Form, otherwise re-opening another form will take hours on a black Field List !
All in all I am really not happy with Access 2007 / 2010 on Windows 10...
phil
Upvotes: 0
|
2018/03/18
| 341 | 1,155 |
<issue_start>username_0: I was wondering whether it is possible to change the **FOR** loops to **WHILE** loops using C++.
Here is part of the code below.
```
int sum = 0;
for (int i = 0; i < length; i++) {
sum = sum + MyList[i];
}
cout << "The Total is: " <<
sum < largest )
cout << "The Largest Value is: " << largest <
```<issue_comment>username_1: ```
for (beginning; condition; increment) {
body;
}
```
is equivalent to
```
{
beginning;
while (condition) {
body;
increment;
}
}
```
There is not much more to demonstrate.
Upvotes: 1 <issue_comment>username_2: ```
int sum = 0;
int i = 0;
while (i < length ) {
sum = sum + MyList[i];
i++;
}
cout << "The Total is: " < largest )
cout << "The Largest Value is: " << largest <
```
1. This is what your `for` loop code will look like when converted to a `while` loop.
2. In a `for` loop we initialize, check the condition, and increment/decrement all in the loop header. In a `while` loop we initialize first, pass only the condition to the loop,
and increment/decrement just before the loop repeats.
Upvotes: 1 [selected_answer]
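As an aside, the for-to-while transformation is easy to verify empirically. Below is a small Python sketch (Python is used only because it is quick to run; the list values are made up stand-ins for `MyList`) showing that the two loop forms compute the same sum:

```python
my_list = [3, 41, 12, 9, 74, 15]  # made-up data standing in for MyList

# for-style loop
total_for = 0
for i in range(len(my_list)):
    total_for = total_for + my_list[i]

# equivalent while-style loop: initialize, test, then increment by hand
total_while = 0
i = 0
while i < len(my_list):
    total_while = total_while + my_list[i]
    i += 1

print(total_for, total_while)  # 154 154
```

The same mechanical rewrite (move initialization before the loop, keep only the condition, move the increment to the end of the body) is what the C++ answer above applies.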
|
2018/03/18
| 273 | 1,018 |
<issue_start>username_0: I just deleted all unnecessary files by deleting them in the directory.
How do I tell git about all the deleted files without individually `git rm`-ing each of the file names?<issue_comment>username_1: you can also just do "git add ." then all deleted files within the directory get status updated and you can commit it
Upvotes: 0 <issue_comment>username_2: Since these are now changed files, `git commit -a` will commit all these deletions at once.
For the future: use `git rm` instead of just deleting your files. It not only removes files from the index, but also deletes them from disk.
Upvotes: 1 <issue_comment>username_3: `git rm` is in effect the same as doing `rm && git add` . You already did the first part of this (`rm` ), so what's left is `git add` .
To add all changes (a deletion counts as a change) while avoiding any files not yet tracked, use `git add -u`.
More info at [man git-rm](https://gitirc.eu/git-rm.html), [man git-add](https://gitirc.eu/git-add.html).
Upvotes: 1
|
2018/03/18
| 317 | 1,113 |
<issue_start>username_0: I need to load datasource properties from properties file
db.properties:
```
url = my_url
user = user_name
password = <PASSWORD>
```
this is dataSource (camelcontext.xml):
I'm trying it like this, but it is not working.
```
```
My routes are implemented in Java DSL.
|
2018/03/18
| 853 | 2,382 |
<issue_start>username_0: So my input values are as follows:
```
temp_dict1 = {'A': [1,2,3,4], 'B':[5,5,5], 'C':[6,6,7,8]}
temp_dict2 = {}
val = [5]
```
The list `val` may contain more values, but for now, it only contains one. My desired outcome is:
```
>>>temp_dict2
{'B':[5]}
```
The final dictionary needs to only have the keys for the lists that contain the item in the list `val`, and only unique instances of that value in the list. I've tried iterating through the two objects as follows:
```
for i in temp_dict1:
for j in temp_dict1[i]:
for k in val:
if k in j:
temp_dict2.setdefault(i, []).append(k)
```
But that just raises an `argument of type 'int' is not iterable` error (the inner `for j` loop already yields the list elements, so `k in j` asks whether `k` is inside an int). Any ideas?<issue_comment>username_1: Changed your dictionary to cover some more cases:
```
temp_dict1 = {'A': [1,2,3,4], 'B':[5,5,6], 'C':[6,6,7,8]}
temp_dict2 = {}
val = [5, 6]
for item in val:
for key, val in temp_dict1.items():
if item in val:
temp_dict2.setdefault(key, []).append(item)
print(temp_dict2)
# {'B': [5, 6], 'C': [6]}
```
Or, the same using list comprehension (looks a bit hard to understand, not recommended).
```
temp_dict2 = {}
[temp_dict2.setdefault(key, []).append(item) for item in val for key, val in temp_dict1.items() if item in val]
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: For comparison with [@KeyurPotdar's solution](https://stackoverflow.com/a/49349032/9209546), this can also be achieved via `collections.defaultdict`:
```
from collections import defaultdict
temp_dict1 = {'A': [1,2,3,4], 'B':[5,5,6], 'C':[6,6,7,8]}
temp_dict2 = defaultdict(list)
val = [5, 6]
for i in val:
for k, v in temp_dict1.items():
if i in v:
temp_dict2[k].append(i)
# defaultdict(list, {'B': [5, 6], 'C': [6]})
```
Upvotes: 1 <issue_comment>username_3: You can try this approach:
```
temp_dict1 = {'A': [1,2,3,4,5,6], 'B':[5,5,5], 'C':[6,6,7,8]}
val = [5,6]
def values(dict_,val_):
default_dict={}
for i in val_:
for k,m in dict_.items():
if i in m:
if k not in default_dict:
default_dict[k]=[i]
else:
default_dict[k].append(i)
return default_dict
print(values(temp_dict1,val))
```
output:
```
{'B': [5], 'C': [6], 'A': [5, 6]}
```
Upvotes: 0
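For completeness, the same filtering can be written without explicit nested loops. This is an alternative sketch (not any of the answers above): iterating over `val` rather than over the lists guarantees each matching value appears only once per key, which handles the duplicate 5s in `'B': [5, 5, 5]`:

```python
temp_dict1 = {'A': [1, 2, 3, 4], 'B': [5, 5, 5], 'C': [6, 6, 7, 8]}
val = [5]

temp_dict2 = {
    key: [v for v in val if v in lst]      # one entry per matching value in val
    for key, lst in temp_dict1.items()
    if any(v in lst for v in val)          # drop keys with no match at all
}
print(temp_dict2)  # {'B': [5]}
```

Note this assumes `val` itself has no duplicates; if it might, deduplicate it first (e.g. `val = list(dict.fromkeys(val))`).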
|
2018/03/18
| 542 | 1,934 |
<issue_start>username_0: I'm trying to import pygame in Anaconda. I tried it by opening Spyder and typing the following code:
```
import pygame
```
Then something happens as follows:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'pygame'
```
I have done some research on Google about this. I think it has something to do with this link: <https://anaconda.org/CogSci/pygame>.
I have downloaded the file from this link, but I don't know what to do with it.
Why can't I import pygame? How can I solve this problem?<issue_comment>username_1: Forget about my last answer. I got what you're doing wrong. Module/package installations must happen in a cmd window, not from the Python shell. You're trying to install it from the Python shell directly. Just open a cmd window and type
```
conda install pygame
```
and it should work.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Go to the start menu > anaconda3 > anaconda prompt
in there type `pip install pygame`
I tried many solutions and this was the only one that worked for me.
Upvotes: 0 <issue_comment>username_3: 1. [Activate your environment](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#activating-an-environment)
```
conda activate myenv
```
2. [Install Pygame in Conda](https://anaconda.org/cogsci/pygame)
```
conda install -c cogsci pygame
```
Upvotes: 2 <issue_comment>username_4: The command listed earlier:
```
conda install -c cogsci pygame
```
will install pygame 1.9.2a0 which is an older package. I was able to find 1.9.5 at this channel:
```
conda install -c delichon pygame
```
But, since that version might be old by the time you read this, you might be better off going to <https://anaconda.org/anaconda/repo> and searching for pygame there. That way you'll be able to see which versions are available on which channel (like cogsci or delichon).
Upvotes: 3
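Whichever install route you use, this error usually means the package was installed for a different interpreter than the one Spyder is running. A quick standard-library check (a sketch; `pygame` is just the name being probed, and nothing is imported as a side effect) tells you whether a package is visible to the current interpreter:

```python
import importlib.util

def has_module(name):
    """Return True if `name` is importable by the interpreter running this code."""
    return importlib.util.find_spec(name) is not None

print(has_module("json"))    # True: part of the standard library
print(has_module("pygame"))  # True only if pygame is installed for THIS interpreter
```

If this prints `False` in Spyder but `conda list` shows pygame, the two are using different environments.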
|
2018/03/18
| 708 | 2,536 |
<issue_start>username_0: I've got XML source that I'm trying to catalog with Glue:
```
xml version="1.0"?
XML Developer's Guide
Computer
44.95
2000-10-01
An in-depth look at creating applications with XML.
Gambardella, Matthew
...
```
The book entries are being picked up just fine, but how do I configure my crawler / classifier(s) to recognize the nesting under `authors`?<issue_comment>username_1: You shouldn't need to specify a classifier unless you want to capture only that row tag. Then you would just set the classifier's row tag to `author`. However, you will not get the rest of the data.
If you use a crawler to infer the schema of the data, it will capture what is in `authors` as a struct type. See the picture below:
[](https://i.stack.imgur.com/GbXdZ.png)
Now you can map the field within the glue job:
[](https://i.stack.imgur.com/q6ncD.png)
Or just access the field itself within the job's code like this:
[](https://i.stack.imgur.com/wkmNn.png)
Upvotes: 2 <issue_comment>username_2: We had a lot of trouble loading nested XML data into the `DynamicFrame`. The problem is that you cannot use a standard Spark (PySpark in our case) `XPATH` Hive DDL statements to load the `DataFrame` (`DynamicFrame` in case of AWS GLUE).
Our solution was to load the `DynamicFrame` using just the naive `RowTag` parameter alone in the **Table Properties** (not in the **Serde Parameters** as the Crawler suggested). This will give you a single `dynamicRecord['MySingleParsedField']` that you can then iterate over inside one of the Spark (Glue) jobs to populate new fields. Here is a working example of such iteration code:
```
def Map_Inital_Fields(dynamicRecord):
nested = []
for item in dynamicRecord['MySingleParsedField']:
nested.append(item)
dynamicRecord['title'] = [item.get('title') for item in nested[0].get('book')][0]
dynamicRecord['price'] = [item.get('price') for item in nested[0].get('book')][0]
del dynamicRecord['MySingleParsedField']
return dynamicRecord
mapfields01 = Map.apply(frame = selectfields2, f = Map_Inital_Fields, transformation_ctx = "mapfields01")
```
This is just an example, but basically, once you have the parsed XML object as a field in the `DynamicFrame`, you can think of it as a Python object (a dict) and then modify it as you need.
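Outside of Glue, exactly this kind of nested access can be sanity-checked on a plain Python dict. The record below is hypothetical (field names and values are made up to mirror the Map function above), so the reshaping logic can be exercised without a Spark cluster:

```python
def map_initial_fields(rec):
    """Flatten a nested parsed-XML field into top-level keys, mirroring the Glue Map above."""
    nested = list(rec["MySingleParsedField"])
    rec["title"] = [item.get("title") for item in nested[0].get("book")][0]
    rec["price"] = [item.get("price") for item in nested[0].get("book")][0]
    del rec["MySingleParsedField"]
    return rec

# Hypothetical record, shaped like the parsed XML struct described above.
record = {
    "MySingleParsedField": [
        {"book": [{"title": "XML Developer's Guide", "price": "44.95"}]}
    ]
}
print(map_initial_fields(record))
# {'title': "XML Developer's Guide", 'price': '44.95'}
```

Once the pure-dict version behaves, the same function body can be handed to `Map.apply` as in the answer.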
Upvotes: 1
|
2018/03/18
| 1,668 | 5,302 |
<issue_start>username_0: I have been working on some checkbox functionality, but I'm a bit confused about changing the checkbox styles.
I tried setting a color on the checkbox after it is clicked, but it didn't work.
```
.container input:checked ~ .checkmark:after {
display: block;
color: #000 !important;
}
```
This seems like it should be easy, but somehow the tick color (white) cannot be changed. Can someone help me out?
Here's the code snippet.
```css
.container {
display: block;
position: relative;
padding-left: 35px;
margin-bottom: 12px;
cursor: pointer;
font-size: 22px;
-webkit-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
/* Hide the browser's default checkbox */
.container input {
position: absolute;
opacity: 0;
cursor: pointer;
}
/* Create a custom checkbox */
.checkmark {
position: absolute;
top: 0;
left: 0;
height: 25px;
width: 25px;
background-color: #eee;
}
/* On mouse-over, add a grey background color */
.container:hover input ~ .checkmark {
background-color: #ccc;
}
/* When the checkbox is checked, add a blue background */
.container input:checked ~ .checkmark {
background-color: #2196F3;
}
/* Create the checkmark/indicator (hidden when not checked) */
.checkmark:after {
content: "";
position: absolute;
display: none;
}
/* Show the checkmark when checked */
.container input:checked ~ .checkmark:after {
display: block;
color: #000 !important;
}
/* Style the checkmark/indicator */
.container .checkmark:after {
left: 9px;
top: 5px;
width: 5px;
height: 10px;
border: solid white;
border-width: 0 3px 3px 0;
-webkit-transform: rotate(45deg);
-ms-transform: rotate(45deg);
transform: rotate(45deg);
}
```
```html
One
Two
Three
Four
```<issue_comment>username_1: The checkmark is being created using a border color, here:
```
.container .checkmark:after {
border: solid white;
}
```
So just change that color as desired:
```css
.container {
display: block;
position: relative;
padding-left: 35px;
margin-bottom: 12px;
cursor: pointer;
font-size: 22px;
-webkit-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
/* Hide the browser's default checkbox */
.container input {
position: absolute;
opacity: 0;
cursor: pointer;
}
/* Create a custom checkbox */
.checkmark {
position: absolute;
top: 0;
left: 0;
height: 25px;
width: 25px;
background-color: #eee;
}
/* On mouse-over, add a grey background color */
.container:hover input ~ .checkmark {
background-color: #ccc;
}
/* When the checkbox is checked, add a blue background */
.container input:checked ~ .checkmark {
background-color: #2196F3;
}
/* Create the checkmark/indicator (hidden when not checked) */
.checkmark:after {
content: "";
position: absolute;
display: none;
}
/* Show the checkmark when checked */
.container input:checked ~ .checkmark:after {
display: block;
}
/* Style the checkmark/indicator */
.container .checkmark:after {
left: 9px;
top: 5px;
width: 5px;
height: 10px;
border: solid #000;
border-width: 0 3px 3px 0;
-webkit-transform: rotate(45deg);
-ms-transform: rotate(45deg);
transform: rotate(45deg);
}
```
```html
One
Two
Three
Four
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: If you're trying to change the colour of the tick, you have to change the border colour.
```
.container input:checked ~ .checkmark:after {
display: block;
border-color: #000;
}
```
```css
.container {
display: block;
position: relative;
padding-left: 35px;
margin-bottom: 12px;
cursor: pointer;
font-size: 22px;
-webkit-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
/* Hide the browser's default checkbox */
.container input {
position: absolute;
opacity: 0;
cursor: pointer;
}
/* Create a custom checkbox */
.checkmark {
position: absolute;
top: 0;
left: 0;
height: 25px;
width: 25px;
background-color: #eee;
}
/* On mouse-over, add a grey background color */
.container:hover input ~ .checkmark {
background-color: #ccc;
}
/* When the checkbox is checked, add a blue background */
.container input:checked ~ .checkmark {
background-color: #2196F3;
}
/* Create the checkmark/indicator (hidden when not checked) */
.checkmark:after {
content: "";
position: absolute;
display: none;
}
/* Show the checkmark when checked */
.container input:checked ~ .checkmark:after {
display: block;
border-color: #000;
}
/* Style the checkmark/indicator */
.container .checkmark:after {
left: 9px;
top: 5px;
width: 5px;
height: 10px;
border: solid white;
border-width: 0 3px 3px 0;
-webkit-transform: rotate(45deg);
-ms-transform: rotate(45deg);
transform: rotate(45deg);
}
```
```html
One
Two
Three
Four
```
Upvotes: 1
|
2018/03/18
| 1,027 | 4,476 |
<issue_start>username_0: I have been working on machine learning problems lately as part of my internship. So far I have been using Tensorflow with python because that's what I am most comfortable with. Once a problem is solved using deep learning, I am left with the architecture of the network and the weights. Now, my problem is, how can I deploy my solution in production? I won't be using tensorflow serving because it is mainly for huge applications where you set a remote server and your developed application will make requests to this server. In my case, I just want to develop a machine learning solution and integrate it into an already existing software that uses C++ with visual studio 2017.
So far and after a lot of research, I have some solutions in mind :
1) Using the "dnn" module from OpenCV : this module can load graphs and you can do inference and other operations (like extracting a specific layer from the network at run time). This module seemed very promising but then I started facing some problems when using networks that are a little bit different from the one used in the example described in OpenCV github, they used "inception5h" for the example and when I tried to load "inception\_v3" there was an error about some unknown layer in the network, namely the JPEG\_decode layer.
2) Building tensorflow from source and using it directly with C++. This solution seemed like the best one but then I encountered so many problems with parts of my code not compiling while others do. I am using Visual Studio 2017 with Windows 10. So although I was able to build tensorflow from source, I wasn't able to compile all parts of my code, in fact it wasn't even my code, it was an example from tensorflow website, this one : [tensorflow C++ example](https://www.tensorflow.org/api_guides/cc/guide).
3) Another possibility that I am entertaining is using tensorflow for designing the solution and then using another machine learning framework such as Caffe2, CNTK...etc for deployment into production. I have found some possibilities to convert graphs from one framework to another here: [models converters](https://github.com/ysh329/deep-learning-model-convertor). I thought that this could be a reasonable solution because all I have to do is find the framework most compatible with Windows and just do a model conversion once I finish designing my solution in tensorflow and python. The conversion process, though, seems a little too good to be true; am I wrong?
4) A final possibility that I am thinking of is using CPython. So basically, I will create my the pipeline for prediction in python, wrap in some python functions then use in my Visual Studio project and make calls to those functions using C++, here's an example : [embedding python in C++](https://docs.python.org/3.1/extending/embedding.html). I have never used a solution like this before and I am not sure about all the things that could go wrong.
So basically, what do you think is the best solution to deploy a machine learning solution into an already existing project on Visual Studio that uses C++? Am I missing a better solution? Any guidelines or hints are greatly appreciated!<issue_comment>username_1: I used CNTK from the beginning because I just wanted to stay in my C++ world in Visual Studio, and knew that I wanted to deploy as part of my C++ desktop App. No Tensorflow, no Python, no cloud, not even .NET, and no translating models. Just do it in CNTK from the start. I have a commercial product now using Deep Learning. Cool!
Upvotes: 0 <issue_comment>username_2: I ended up using solution 2. After the new updates from tensorflow, it's now easier to build tensorflow from source on Windows. With this solution, I didn't need to worry about the compatibility of my models since I use tensorflow with python for prototyping and I use it with C++ for production.
[EDIT] : In 2021, I am now using ONNX Runtime (ORT) for deploying my models in production as part of a C++ application. The documentation for ORT is not great but the tool itself is very good.
Upvotes: 3 [selected_answer]<issue_comment>username_3: I'd consider exporting your NN model (which is not restricted to tensorflow) using ONNX to Intel Vino or TensorRT in order to export your model to C++ for optimized CPU or optimized GPU
It's states [here](https://hackernoon.com/optimizing-neural-networks-for-production-with-intels-openvino-a7ee3a6883d) that Intel Vino is twice as fast as tensorflow
Upvotes: 0
|
2018/03/18
| 643 | 2,231 |
<issue_start>username_0: I really can't figure out what I'm doing wrong!
```js
setInterval(function() {
var time = document.getElementById("timer").innerHTML;
var x = time * 1000 - 400;
setTimeout(function() {
var x2 = document.getElementById("2x-total").innerHTML;
var x3 = document.getElementById("3x-total").innerHTML;
if (x2 < 2 * x3) {
var evt = document.createEvent("MouseEvents");
evt.initMouseEvent("click", true, true, window, 0, 0, 0, 0, 0, false, false, false, false, 0, null);
var run = document.getElementById("btn2x");
run.dispatchEvent(evt);
}
}, x);
}, 7000);
function x2a() {
document.getElementById("x2p").innerHTML = x2;
}
function x3a() {
document.getElementById("x3p").innerHTML = x3;
}
```
```html
<div id="timer">01.14</div>
<button id="btn2x">2x</button>
<button id="btn3x">3x</button>
<span id="2x-total">12000</span>
<span id="3x-total">12000</span>
```
The code should run in a loop: first wait 7 seconds, then run the script. The script should read the timer value, wait until there are just 400 milliseconds left, and press the correct button, but it crashes. What can I do?<issue_comment>username_1: First remove the while(1) {} loop.
Upvotes: 0 <issue_comment>username_2: The issue is that you're declaring your `x2` and `x3` variables inside of the `setInterval` anonymous method. Try this:
```
var x2;
var x3;
setInterval(function() {
var time = document.getElementById("timer").innerHTML;
var x = time * 1000 - 400;
setTimeout(function() {
x2 = document.getElementById("2x-total").innerHTML;
x3 = document.getElementById("3x-total").innerHTML;
if (x2 < 2 * x3) {
var evt = document.createEvent("MouseEvents");
evt.initMouseEvent("click", true, true, window, 0, 0, 0, 0, 0, false, false, false, false, 0, null);
var run = document.getElementById("btn2x");
run.dispatchEvent(evt);
}
}, x);
}, 7000);
function x2a() {
document.getElementById("x2p").innerHTML = x2;
}
function x3a() {
document.getElementById("x3p").innerHTML = x3;
}
```
Upvotes: 2 [selected_answer]
|
2018/03/18
| 3,206 | 9,243 |
<issue_start>username_0: I was trying to retrieve a file from an SFTP server; connecting to this server works fine with FileZilla using SFTP.
With JSch I have tried this code:
```
JSch jsch = new JSch();
Properties config = new Properties();
config.put("StrictHostKeyChecking", "no");
config.put("PreferredAuthentications", "password");
Session session = jsch.getSession( "super_robot", "192.192.192.com", 6222 );
session.setConfig(config );
session.setPassword( "<PASSWORD>@" );
session.connect( 30000 );
```
Though I got this error:
```none
Caused by: com.jcraft.jsch.JSchException: SSH_MSG_DISCONNECT: 11 Internal server error.
at com.jcraft.jsch.Session.read(Session.java:1004)
at com.jcraft.jsch.Session.connect(Session.java:323)
```
With `StandardFileSystemManager` I'm also having the same error:
```
SftpFileSystemConfigBuilder.getInstance().setTimeout(opts, connectionTimeout);
SftpFileSystemConfigBuilder.getInstance().setIdentityInfo(opts);
SftpFileSystemConfigBuilder.getInstance().setStrictHostKeyChecking(opts, "no");
StandardFileSystemManager fsManager = new StandardFileSystemManager();
fsManager.init();
```
Got me this error:
```none
Caused by: org.apache.commons.vfs2.FileSystemException: Could not connect to SFTP server at "digital.crossix.com".
at org.apache.commons.vfs2.provider.sftp.SftpClientFactory.createConnection(SftpClientFactory.java:147)
at org.apache.commons.vfs2.provider.sftp.SftpFileProvider.doCreateFileSystem(SftpFileProvider.java:79)
... 40 more
Caused by: com.jcraft.jsch.JSchException: SSH_MSG_DISCONNECT: 11 Internal server error.
at com.jcraft.jsch.Session.read(Session.java:1004)
at com.jcraft.jsch.Session.connect(Session.java:323)
at com.jcraft.jsch.Session.connect(Session.java:183)
at org.apache.commons.vfs2.provider.sftp.SftpClientFactory.createConnection(SftpClientFactory.java:145)
... 41 more
```
I have tried other solution offered in these links which didn't solve it:
* [SFTP connector throwing error while giving encrytped passwords in properties file](https://stackoverflow.com/questions/43070939/sftp-connector-throwing-error-while-giving-encrytped-passwords-in-properties-fil)
* [com.jcraft.jsch.JSchException: SSH\_MSG\_DISCONNECT: 2 protocol error: rcvd type 90](https://stackoverflow.com/questions/44653214/com-jcraft-jsch-jschexception-ssh-msg-disconnect-2-protocol-error-rcvd-type-9)
* [com.jcraft.jsch.JSchException: SSH\_MSG\_DISCONNECT: 11 No appropriate prime between 1024 and 1024 is available. en](https://stackoverflow.com/questions/34351298/com-jcraft-jsch-jschexception-ssh-msg-disconnect-11-no-appropriate-prime-betwe)
* [java jsch SSH\_MSG\_DISCONNECT Failed to read binary packet data](https://stackoverflow.com/questions/32307009/java-jsch-ssh-msg-disconnect-failed-to-read-binary-packet-data)
Note that none of these errors is exactly the same as the error I'm facing; they are only somewhat similar.
I didn't find anyone who had the exact error I'm facing.
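One sanity check worth doing against the JSch log further down: SSH negotiation can only succeed if client and server share at least one algorithm per category. Copying the key-exchange proposal lists straight out of that log shows the lists do overlap, so the disconnect is not caused by a missing common kex algorithm. (Note that JSch would land on `diffie-hellman-group14-sha1`, while the FileZilla log below shows it succeeded using group exchange; that difference may be relevant, but this is only a reading of the logs, not a confirmed diagnosis.)

```python
# Key-exchange proposal lists copied from the JSch log in this question.
server_kex = [
    "diffie-hellman-group-exchange-sha1",
    "diffie-hellman-group14-sha1",
    "diffie-hellman-group1-sha1",
]
client_kex = [
    "ecdh-sha2-nistp256", "ecdh-sha2-nistp384", "ecdh-sha2-nistp521",
    "diffie-hellman-group14-sha1",
    "diffie-hellman-group-exchange-sha256",
    "diffie-hellman-group-exchange-sha1",
    "diffie-hellman-group1-sha1",
]

# SSH picks the first client-preferred algorithm the server also offers.
negotiated = next(a for a in client_kex if a in server_kex)
print(negotiated)  # diffie-hellman-group14-sha1
```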
When I was working with FileZilla this is the log I was seeing in console:
```none
Status: Connecting to 192.192.com:6222...
Trace: CControlSocket::SendNextCommand()
Trace: CSftpConnectOpData::Send() in state 0
Trace: Going to execute C:\Program Files\FileZilla FTP Client\fzsftp.exe
Response: fzSftp started, protocol_version=8
Trace: CSftpConnectOpData::ParseResponse() in state 0
Trace: CControlSocket::SendNextCommand()
Trace: CSftpConnectOpData::Send() in state 3
Command: open "<EMAIL>" 6222
Trace: Connecting to 192.192.194 port 6222
Trace: We claim version: SSH-2.0-FileZilla_3.30.0
Trace: Server version: SSH-2.0-NuaneSSH_0.8.1.0
Trace: Using SSH protocol version 2
Trace: Doing Diffie-Hellman group exchange
Trace: Doing Diffie-Hellman key exchange with hash SHA-1
Trace: Server also has ssh-dss host key, but we don't know it
Trace: Host key fingerprint is:
Trace: ssh-rsa 1024 ------------------------------------
Command: Trust new Hostkey: Once
Trace: Initialised AES-256 CBC client->server encryption
Trace: Initialised HMAC-SHA1 client->server MAC algorithm
Trace: Initialised AES-256 CBC server->client encryption
Trace: Initialised HMAC-SHA1 server->client MAC algorithm
Trace: Attempting keyboard-interactive authentication
Trace: Using keyboard-interactive authentication. inst_len: 0, num_prompts: 1
Command: Pass: ********
Trace: Access granted
Trace: Opening session as main channel
Trace: Opened main channel
Trace: Started a shell/command
Status: Connected to 192.192.com
Trace: CSftpConnectOpData::ParseResponse() in state 3
Trace: CControlSocket::ResetOperation(0)
Trace: CSftpConnectOpData::Reset(0) in state 3
Trace: CFileZillaEnginePrivate::ResetOperation(0)
Status: Retrieving directory listing...
Trace: CControlSocket::SendNextCommand()
Trace: CSftpListOpData::Send() in state 0
Trace: CSftpChangeDirOpData::Send() in state 0
Trace: CSftpChangeDirOpData::Send() in state 1
Command: pwd
Response: Current directory is: "/"
Trace: CSftpChangeDirOpData::ParseResponse() in state 1
Trace: CControlSocket::ResetOperation(0)
Trace: CControlSocket::ParseSubcommandResult(0)
Trace: CSftpListOpData::SubcommandResult() in state 1
Trace: CControlSocket::SendNextCommand()
Trace: CSftpListOpData::Send() in state 2
Trace: CSftpListOpData::Send() in state 3
Command: ls
Status: Listing directory /
Trace: CSftpListOpData::ParseResponse() in state 3
Trace: CControlSocket::ResetOperation(0)
Status: Directory listing of "/" successful
Trace: CFileZillaEnginePrivate::ResetOperation(0)
```
What can I do to solve this error?
Attaching also the log from JSch
```none
INFO: Connection established
INFO: Remote version string: SSH-2.0-NuaneSSH_0.8.1.0
INFO: Local version string: SSH-2.0-JSCH-0.1.54
INFO: CheckCiphers: aes256-ctr,aes192-ctr,aes128-ctr,aes256-cbc,aes192-cbc,aes128-cbc,3des-ctr,arcfour,arcfour128,arcfour256
INFO: aes256-ctr is not available.
INFO: aes192-ctr is not available.
INFO: aes256-cbc is not available.
INFO: aes192-cbc is not available.
INFO: CheckKexes: diffie-hellman-group14-sha1,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521
INFO: CheckSignatures: ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521
INFO: SSH_MSG_KEXINIT sent
INFO: SSH_MSG_KEXINIT received
INFO: kex: server: diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
INFO: kex: server: ssh-rsa,ssh-dss
INFO: kex: server: aes256-cbc,aes192-cbc,aes128-cbc,3des-cbc
INFO: kex: server: aes256-cbc,aes192-cbc,aes128-cbc,3des-cbc
INFO: kex: server: hmac-sha1,hmac-md5
INFO: kex: server: hmac-sha1,hmac-md5
INFO: kex: client: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group1-sha1
INFO: kex: client: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521
INFO: kex: client: aes128-ctr,aes128-cbc,3des-ctr,3des-cbc,blowfish-cbc
INFO: kex: client: aes128-ctr,aes128-cbc,3des-ctr,3des-cbc,blowfish-cbc
INFO: kex: client: hmac-md5,hmac-sha1,hmac-sha2-256,hmac-sha1-96,hmac-md5-96
INFO: kex: client: hmac-md5,hmac-sha1,hmac-sha2-256,hmac-sha1-96,hmac-md5-96
INFO: kex: server->client aes128-cbc hmac-md5 none
INFO: kex: client->server aes128-cbc hmac-md5 none
INFO: SSH_MSG_KEXDH_INIT sent
INFO: expecting SSH_MSG_KEXDH_REPLY
INFO: Disconnecting from 192.192.com port 6222
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.CommandLineWrapper.main(CommandLineWrapper.java:65)
Caused by: com.jcraft.jsch.JSchException: SSH_MSG_DISCONNECT: 11 Internal server error.
at com.jcraft.jsch.Session.read(Session.java:1004)
at com.jcraft.jsch.Session.connect(Session.java:323)
at com.xxxxxxx.yyyy.service.SFTPTest.main(SFTPProviderTest.java:50)
... 5 more
```
|
2018/03/18
| 387 | 1,332 |
<issue_start>username_0: I have two classes that are defined in different ways like below:
```
template <typename T, int N> class Stack {
T data[N];
};
template <typename T = /* default */, int N = /* default */> // Both defaulted
class Stack {
T data[N];
};
```
I want to know if these are two different ways to define a class, or do they have a different meaning?
|
2018/03/18
| 454 | 1,693 |
<issue_start>username_0: I'm new to Ionic and I'm trying to use ion-select dynamically with preselected options.
I just want to edit my profile with some values. In this example I can set multiple languages that the user speaks.
First, I get all the languages stored in my database through my API and put the result in the variable LanguagesData.
[My result in console.log](https://i.stack.imgur.com/CpWJb.png)
Second, I get the languages currently spoken by my user and put the result in currentLanguages.
[My result in console.log](https://i.stack.imgur.com/N7exj.png)
Third, my code:
```html
{{language.language}}
```
My values are not preselected.
If I replace `[(ngModel)]="currentLanguage"` with `[(ngModel)]="LanguagesData"`, all values are preselected.
Can you explain what I did wrong?
Thanks.<issue_comment>username_1: ```
{{language.language}}
```
You were missing `[value]="language.language"`
Hope it solves your problem.
Upvotes: 0 <issue_comment>username_2: I resolved my problem with a trick (I'm not proud of it).
Currently I get my 2 arrays of data from API calls.
If I use the base array it does not work.
But if I do something like this:
```js
let i = 0;
let j = 0;
while (i < this.currentLanguage.length) {
while (j < this.languagesData.length) {
if (this.currentLanguage[i].id == this.languagesData[j].id)
this.finalLanguage.push(this.languagesData[j]);
j += 1;
}
j = 0;
i += 1;
}
```
It works when I use finalLanguage instead of currentLanguage.
I don't really understand why ...
Upvotes: 1
|
2018/03/18
| 777 | 3,070 |
<issue_start>username_0: The exact warning I get is
```
warning C4715: "spv::Builder::makeFpConstant": not all control paths return a value
```
and `spv::Builder::makeFpConstant` is
```
Id Builder::makeFpConstant(Id type, double d, bool specConstant)
{
assert(isFloatType(type));
switch (getScalarTypeWidth(type)) {
case 16:
return makeFloat16Constant(d, specConstant);
case 32:
return makeFloatConstant(d, specConstant);
case 64:
return makeDoubleConstant(d, specConstant);
}
assert(false);
}
```
Can anyone help me?<issue_comment>username_1: Well, the warning message says exactly what's wrong.
If `NDEBUG` is defined (i.e. `assert`s are disabled) and neither of the `case`s is taken, then the control reaches the end of function before any of the `return` statements (which results in undefined behaviour).
Upvotes: 1 <issue_comment>username_2: An `assert` does not count as a piece of control flow logic. It's just a "documenting" contract check. In release builds it doesn't even happen! It's a debug tool, only.
So, you're left with a function that omits a `return` statement if the scalar type width is not 16, 32 or 64. This means your program has undefined behaviour. The warning is telling you that you need to return something in *all* cases, even if you think the other cases won't happen at runtime.
In this case I'd probably throw an exception if you get past the `switch` without returning — this may then be handled like any other exceptional case. An alternative if you don't want exceptions is to call `std::terminate` yourself (which is what the assertion will ultimately do in a debug build).
However, terminating is a bit of a nuclear option and you don't really want your program terminating in production; you want it kicking out a proper diagnostic message via your existing exception handling channels, so that when your customer reports a bug, they can say "apparently makeFpConstant got a value it didn't expect" and you know what to do. And if you don't want to leak function names to customers then you could at least pick some "secret" failure code that only your team/business knows (and document it internally!).
Terminating in a debug build is often fine, though, so leave that `assert` in too! Or just rely on the exception that you now have, which will result in a termination anyway if you don't catch it.
Here's how I'd probably write the function:
```
Id Builder::makeFpConstant(
const Id type,
const double d,
const bool specConstant
)
{
assert(isFloatType(type));
const auto width = getScalarTypeWidth(type);
switch (width) {
case 16: return makeFloat16Constant(d, specConstant);
case 32: return makeFloatConstant(d, specConstant);
case 64: return makeDoubleConstant(d, specConstant);
}
// Shouldn't get here!
throw std::logic_error(
"Unexpected scalar type width "
+ std::to_string(width)
+ " in Builder::makeFpConstant"
);
}
```
Upvotes: 2
|
2018/03/18
| 803 | 2,743 |
<issue_start>username_0: So, I'm attempting to fork some open source code and upon compilation I am greeted with these errors:
>
> C2039 'TransactionId': is not a member of 'CryptoNote'
>
>
> C2061 syntax error: identifier 'TransactionId'
>
>
>
I'm relatively inexperienced with `C++` usually confining myself to the realms of `C#`, however, I can clearly see that `TransactionId` is a `typedef` declared in a different file like so:
```
namespace CryptoNote {
typedef size_t TransactionId;
typedef size_t TransferId;
//more code
```
And the line throwing the error is:
```
void sendTransactionCompleted(CryptoNote::TransactionId _id, bool _error, const QString& _error_text);
```
To my inexperienced eyes, it looks as though `TransactionId` is definitely a member of `CryptoNote`, is it not?
Any ideas what's going on?
The repo is here: <https://github.com/hughesjs/Incendium_GUI>
And the necessary submodule is here: <https://github.com/hughesjs/Incendium_Crypt><issue_comment>username_1: It's difficult to say without seeing all the code but a few things come to mind:
* Firstly, is this the first error you get? Compilation errors with C++ tend to result in a bunch of secondary errors. For example, the following results in a similar error to what you see but fails to compile because `size_t` has not been defined:
namespace CryptoNote {
typedef size_t TransactionId;
typedef size_t TransferId;
}
int main(void)
{
CryptoNote::TransactionId id;
return 0;
}
$ g++ -std=c++11 namespace.cxx -o namespace
namespace.cxx:4:9: error: ‘size_t’ does not name a type
typedef size_t TransactionId;
^~~~~~
namespace.cxx:5:9: error: ‘size_t’ does not name a type
typedef size_t TransferId;
^~~~~~
namespace.cxx: In function ‘int main()’:
namespace.cxx:11:17: error: ‘TransactionId’ is not a member of ‘CryptoNote’
CryptoNote::TransactionId id;
^~~~~~~~~~~~~
See <http://www.cplusplus.com/reference/cstring/size_t/> for a list of headers that define `size_t`.
* Is `CryptoNote` nested inside another namespace?
* Is there another `CryptoNote` defined in the namespace your function is declared in?
* Are these in the same header file? If not, is the header file where the namespace is defined included in the header file containing the function declaration?
Upvotes: 0 <issue_comment>username_2: Those typedefs are defined in `Incendium_Crypt/include/IWalletLegacy.h`.
```
void sendTransactionCompleted(CryptoNote::TransactionId _id, bool _error, const QString& _error_text);
```
is defined in `Incendium_GUI/src/gui/SendFrame.h`, which includes `IWallet.h`. However, `IWallet.h` does **not** in turn include `IWalletLegacy.h`. Hence, those typedefs are unknown to `SendFrame.h`.
Upvotes: 2 [selected_answer]
|
2018/03/18
| 597 | 2,142 |
<issue_start>username_0: For my public google bucket I used
```
<img src="https://storage.cloud.google.com/[bucketname]/[innerpath]/[filename]">
```
It was very handy because I only needed the "inner path" and filename to make the "full image url", but it turns out it only works when you are signed in with a Google Account.
Is there another way to do it without setting signedURL for each file?
|
2018/03/18
| 1,815 | 5,750 |
<issue_start>username_0: I am trying to save JSON data into my Class. I've tried different ways but didn't succeed.
Here is my class:
```
class YoCurrency
{
public string Currency { get; set; }
public double high { get; set; }
public double low { get; set; }
public double avg { get; set; }
public double vol { get; set; }
[JsonProperty("vol_cur")]
public double vol_cur { get; set; }
public double last { get; set; }
public double buy { get; set; }
public double sell { get; set; }
public int updated { get; set; }
}
```
Here is [JSON file.](https://yobit.net/api/3/ticker/btc_usd-eth_usd-doge_usd-pac_usd-rdd_usd)
Trying to deserialize with generic List:
```
List<YoCurrency> a = JsonConvert.DeserializeObject<List<YoCurrency>>(json);
```
gives an error:
>
> Newtonsoft.Json.JsonSerializationException: 'Cannot deserialize the current JSON object (e.g. {"name":"value"}) into type 'System.Collections.Generic.List`1[YoBitParser.YobitCurrency]' because the type requires a JSON array (e.g. [1,2,3]) to deserialize correctly.
> To fix this error either change the JSON to a JSON array (e.g. [1,2,3]) or change the deserialized type so that it is a normal .NET type (e.g. not a primitive type like integer, not a collection type like an array or List) that can be deserialized from a JSON object. JsonObjectAttribute can also be added to the type to force it to deserialize from a JSON object.
> Path 'btc\_usd', line 1, position 11.'
>
>
>
Deserializing this way:
```
var a = JsonConvert.DeserializeObject<YoCurrency>(json);
```
Doesn't give any result, because all class fields get 0 value
Also I tried to use two classes like this:
```
class YoCurrency
{
public double high { get; set; }
public double low { get; set; }
public double avg { get; set; }
public double vol { get; set; }
[JsonProperty("vol_cur")]
public double vol_cur { get; set; }
public double last { get; set; }
public double buy { get; set; }
public double sell { get; set; }
public int updated { get; set; }
}
class YoPair
{
YoCurrency Currency { get; set; }
string Name { get; set; }
}
```
But it didn't give any positive result. I tried to generate c# classes using <http://json2csharp.com>, but it generates a class for every currency:
```
public class BtcUsd
{
public double high { get; set; }
public double low { get; set; }
public double avg { get; set; }
public double vol { get; set; }
public double vol_cur { get; set; }
public double last { get; set; }
public double buy { get; set; }
public double sell { get; set; }
public int updated { get; set; }
}
public class EthUsd
{
...
}
public class RootObject
{
public BtcUsd btc_usd { get; set; }
public EthUsd eth_usd { get; set; }
...
}
```
But I suppose it's not a good way in the case of 50 or 500 pairs (or should I create a unique class for every currency?)
So please help me to deserialize this JSON or give me some information that could help me to solve this problem.<issue_comment>username_1: Why not just make one class?
```
class Program
{
static void Main(string[] args)
{
var array = "[{\"btc_usd\":[{\"high\":8550.0102,\"low\":7800,\"avg\":8175.0051,\"vol\":1615543.57397705,\"vol_cur\":197.54076079,\"last\":7850,\"buy\":7850.00000000,\"sell\":7879.00000000,\"updated\":1521383863}],\"eth_usd\":[{\"high\":622.93708559,\"low\":482,\"avg\":552.46854279,\"vol\":346598.40520112,\"vol_cur\":630.37075493,\"last\":488.27857735,\"buy\":489.77564548,\"sell\":492.11726255,\"updated\":1521383876}]}]";
List<RootObject> a = JsonConvert.DeserializeObject<List<RootObject>>(array);
}
}
public class RootCurrency
{
public double high { get; set; }
public int low { get; set; }
public double avg { get; set; }
public double vol { get; set; }
public double vol_cur { get; set; }
public double last { get; set; }
public double buy { get; set; }
public double sell { get; set; }
public int updated { get; set; }
}
public class RootObject
{
public List<RootCurrency> btc_usd { get; set; }
public List<RootCurrency> eth_usd { get; set; }
}
```
Upvotes: 1 <issue_comment>username_2: It looks like your class structure is not correct. I suggest you generate the classes with the help of Visual Studio as shown below, and then try your code using the generated classes.
[](https://i.stack.imgur.com/xNpS2.png)
Based on your JSON structure you need a class design like this:
```
public class Rootobject
{
// do List if you are expecting multiple element
public CurrencyDetail btc\_usd { get; set; }
public CurrencyDetail eth\_usd { get; set; }
public CurrencyDetail doge\_usd { get; set; }
public CurrencyDetail pac\_usd { get; set; }
public CurrencyDetail rdd\_usd { get; set; }
}
public class CurrencyDetail
{
public float high { get; set; }
public int low { get; set; }
public float avg { get; set; }
public float vol { get; set; }
public float vol_cur { get; set; }
public float last { get; set; }
public float buy { get; set; }
public float sell { get; set; }
public int updated { get; set; }
}
```
My suggestion is, if possible, to modify your JSON so that it contains the currency type in the currency detail; then your class structure will be like:
```
public class Rootobject
{
public List<CurrencyDetail> currencies { get; set; }
}
public class CurrencyDetail
{
// btc_usd or eth_usd or doge_usd or pac_usd
public string type { get; set; }
public float high { get; set; }
public int low { get; set; }
public float avg { get; set; }
public float vol { get; set; }
public float vol_cur { get; set; }
public float last { get; set; }
public float buy { get; set; }
public float sell { get; set; }
public int updated { get; set; }
}
```
Upvotes: 0
|
2018/03/18
| 1,102 | 3,240 |
<issue_start>username_0: Alright so I stuck on the code reading docs.
I am starting with JS, so go easy on me =].
I've got an array called `Area`
which contains a few values:
```
let Area = ["Kanig Village", "Fort Chune", "Shadowy Heights", ...];
```
I cannot change this array until a specific part of my code is executed, but afterwards I wish to append another value to every position. How do I do that to get exactly this:
```
let Area = ["Kanig Village 14:30", "Fort Chune 15:30", "Shadowy Heights 16:30", ...];
```
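For the record, the transformation the question asks for can be done with `Array.prototype.map`; the `times` array below is a hypothetical stand-in for wherever the time values actually come from:

```javascript
let Area = ["Kanig Village", "Fort Chune", "Shadowy Heights"];
// Hypothetical source of the values to append (assumption: one time per area).
const times = ["14:30", "15:30", "16:30"];

// map builds a new array, so this can run after the phase in which
// the original array must stay unchanged.
Area = Area.map((name, i) => `${name} ${times[i]}`);

console.log(Area);
// [ 'Kanig Village 14:30', 'Fort Chune 15:30', 'Shadowy Heights 16:30' ]
```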
|
2018/03/18
| 1,298 | 3,990 |
<issue_start>username_0: I'm building a small SPA web app with react, react router and semantic. The problem appeared when I decided to move my routes from the main .jsx to a separate file. I wanted to create something similar to [this](https://alligator.io/react/react-router-map-to-routes/) and theoretically everything works well, but the components in my routes are 'undefined'.
My project structure looks like this:
```
root
|_ src
|_ config
| |_ routes.js
| | ...
|_ app
|_ index.jsx
| ...
```
routes.js:
```
import { Main, Add } from 'views';
export const routes = [
{
component: Main,
path: '/',
}, {
component: Add,
path: '/add',
},
];
```
Index.jsx:
```
import React from 'react';
import ReactDOM from 'react-dom';
import { routes } from 'config';
import {
BrowserRouter,
Route,
Switch,
} from 'react-router-dom';
export class Index extends React.Component {
  render() {
    return (
      <BrowserRouter>
        <Switch>
          {routes.map(props => <Route key={props.path} {...props} />)}
        </Switch>
      </BrowserRouter>
    );
  }
}

ReactDOM.render(<Index />, document.getElementById('app'));
```
When I put console.log(routes) into the render I got something like this:
```
[
{ component: undefined, path: "/" },
{ component: undefined, path: "/add" }
]
```
and I have no idea where the problem may be.
|
2018/03/18
| 386 | 1,371 |
<issue_start>username_0: To play/pause an audio file on Web page I am currently using two different buttons one for playing the audio and the other for pausing the audio.
Here is the code I am using :
```
Play
Pause
```
Javascript :
```
var song=document.getElementById("ad");
function playSong()
{
song.play();
}
function pauseSong()
{
song.pause();
}
```
I want to use a single button that will play/pause the audio; for example, the first click will start playing the audio and the second click will pause it.
Is it possible to combine these two buttons so that I can handle the audio using a single button?<issue_comment>username_1: ```
var song=document.getElementById("ad");
function togglePlayPause()
{
song.paused? song.play() : song.pause();
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: I myself used this code for a school exercise:
```
// JavaScript
function Play()
{
var myVideo = document.getElementById("Video1");
if(myVideo.paused)
{
myVideo.play();
//document.getElementById("play").innerHTML="Pause";
}
else
{
myVideo.pause();
//document.getElementById("Pause").innerHTML="Play";
}
}
Play/Pause
```
This is my complete code, hope this helps.
I think the problem why it doesn't work in your code is that it says
```
if (song.pause)
```
instead of
```
if (song.paused)
```
Upvotes: 1
|
2018/03/18
| 1,032 | 3,101 |
<issue_start>username_0: I am working on a project where I have to loop through large arrays (lists), accessing each item by index.
Usually this involves checking each element against a condition, and thereafter potentially updating its value.
I've noticed that this is extremely slow compared to, for example, doing a similar thing in C#. Here is a sample of simply looping through arrays and reassigning each value:
C#:
```
var a = new double[10000000];
var watch = System.Diagnostics.Stopwatch.StartNew();
for (int i = 0; i < a.Length; i++)
{
a[i] = 1.0;
}
watch.Stop();
var elapsedMs = watch.ElapsedMilliseconds;
//About 40ms
```
Python:
```
a = []
for i in range(0, 10000000):
a.append(0.0)
t1 = time.clock()
for i in range(0, 10000000):
a[i] = 1.0
t2 = time.clock()
totalTime = t2-t1
//About 900ms
```
The python code here seems to be over 20x slower.
I am relatively new to python so I cannot judge if this kind of performance is "normal",
or if I am doing something horribly wrong here. I am running Anaconda as my python environment, PyCharm is my IDE.
Note: I have tried using `nditer` on numpy arrays with no significant performance increase.
Many thanks in advance for any tips!
**UPDATE:**
I've just compared the following two approaches:
```
#timeit: 43ms
a = np.random.randn(1000,1000)
a[a < 0] = 100.0
#timeit: 1650ms
a = np.random.randn(1000,1000)
for x in np.nditer(a, op_flags=['readwrite']):
if (x < 0):
x[...] = 100.0
```
looks like the first (vectorized) approach is the way to go here...<issue_comment>username_1: If you are using `numpy` you should use the `numpy` array type and then take advantage of `numpy` functions and broadcasting:
If your specific need is to assign `1.0` to all elements, there is a specific function for that in `numpy`:
```
import numpy as np
a = np.ones(10_000_000)
```
For a somewhat more general approach, where you first define the array and then assign all the different elements:
```
import numpy as np
a = np.empty(10_000_000)
a[:] = 1.0 # This uses the broadcasting
```
By default, the `np.array` is of type `double` so all elements will be `double`s.
Also, as a side note, using `time` for performance measurement is not optimal, since that will use the "wall clock time" and can be heavily affected by other programs running in the computer. Consider looking into the `timeit` module.
Upvotes: 0 <issue_comment>username_2: Python has a bunch of great ways to iterate over structures. There is a good chance, if you find yourself using what I call `C`-style looping:
```
for i in range(0, 10000000):
    a[i] = 1.0
```
... you are likely doing it wrong.
In my quick testing this is 40x faster than the above:
```
a = [1.0] * 10000000
```
Even a basic list comprehension is 3x faster:
```
a = [1.0 for i in range(0, 10000000)]
```
And as mentioned in comments, [cython](http://cython.org/), [numba](https://numba.pydata.org/) and [numpy](https://docs.scipy.org/doc/numpy/index.html), can all be used to provide various speedups for this sort of work.
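To put rough numbers on the above (timings are machine-dependent and purely illustrative; this also assumes `numpy` is installed):

```python
# Compare a list comprehension, list multiplication, and numpy's vectorized
# fill for building 1,000,000 floats. Absolute times vary per machine, but
# the ordering is stable: both loop-free forms beat the comprehension.
import timeit

import numpy as np

n = 1_000_000

comp = timeit.timeit(lambda: [1.0 for _ in range(n)], number=3)
mult = timeit.timeit(lambda: [1.0] * n, number=3)
full = timeit.timeit(lambda: np.full(n, 1.0), number=3)

print(f"list comprehension:  {comp:.4f}s")
print(f"list multiplication: {mult:.4f}s")
print(f"np.full:             {full:.4f}s")
```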
Upvotes: -1
|
2018/03/18
| 1,003 | 3,499 |
<issue_start>username_0: When I click on the Signup button I receive the following error: Array to string conversion.
I think that the error occurs when I call `fileInput()` method but i don't know how to solve it.
This is the partial code of the view
```
<?php $form = ActiveForm::begin(); ?>
<?= $form->field($model, 'username')->textInput(['autofocus' => true]) ?>
<?= $form->field($model, '<PASSWORD>')->passwordInput() ?>
<?= $form->field($model, 'email') ?>
<?= $form->field($modelUpload, 'imageFile')->fileInput() ?>
<?= Html::submitButton('Signup', ['class' => 'btn', 'name' => 'signup-button']) ?>
<?php ActiveForm::end(); ?>
```
While this is the code for the controller:
```
<?php
class SiteController extends Controller {
/**
* Signs user up.
*
* @return mixed
*/
public function actionSignup() {
$model = new SignupForm();
$modelUpload = new UploadForm();
if ($model->load(Yii::$app->request->post()) && $modelUpload->load(Yii::$app->request->post())) {
$modelUpload->imageFile = UploadedFile::getInstances($modelUpload, 'imageFile');
if ($user = $model->signup()) {
if (Yii::$app->getUser()->login($user) && $modelUpload->upload()) {
return $this->goHome();
}
}
}
return $this->render('signup', [
'model' => $model,
'modelUpload' => $modelUpload,
]);
}
}
```
This is the code of the model. It's the same of the official documentation.
```
<?php
class UploadForm extends Model {
/**
* @var UploadedFile
*/
public $imageFile;
public function rules() {
return [
[['imageFile'], 'file', 'skipOnEmpty' => false, 'extensions' => 'png, jpg'],
];
}
public function upload() {
if ($this->validate()) {
$this->imageFile->saveAs('uploads/' . $this->imageFile->baseName . '.' . $this->imageFile->extension);
return true;
} else {
return false;
}
}
}
?>
```
Errors:
[](https://i.stack.imgur.com/M4HYF.png)
[](https://i.stack.imgur.com/RnJGe.png)
[](https://i.stack.imgur.com/Rr62m.png)<issue_comment>username_1: `Instant Solution`
------------------
Change your line inside the `actionSignup()` from below
```
UploadedFile::getInstances($modelUpload, 'imageFile');
```
to
```
UploadedFile::getInstance($modelUpload, 'imageFile');
```
`Reason`
--------
It's only a single file you are uploading, not multiple files, so `getInstances()` should be `getInstance()`.
`About`
-------
>
> `getInstance` : Returns an uploaded file for the given model
> attribute. The file should be uploaded using
> [[\yii\widgets\ActiveField::fileInput()]]
>
>
> `getInstances`: Returns all uploaded files for the given model
> attribute.
>
>
>
Upvotes: 3 [selected_answer]<issue_comment>username_2: if you want to upload **multiple** files - having`'maxFiles' > 1` in your model's `FileValidator` rules - change your **attribute name** from:
```
<?= $form->field($modelUpload, 'imageFile')->fileInput() ?>
```
to
```
<?= $form->field($modelUpload, 'imageFile[]')->fileInput() ?>
```
read this:
<https://www.yiiframework.com/doc/guide/2.0/en/input-file-upload#uploading-multiple-files>
Upvotes: 0
|
2018/03/18
| 489 | 1,757 |
<issue_start>username_0: So, I am getting a dynamic URL which I want to open in new tab.
PS: I am currently writing this in content script.
for example
```
var newURL = document.querySelectorAll("a")[lengthA].href
```
I want to open this in a new tab. In JavaScript, we simply do window.open to open a URL in a new tab, but it wasn't working. Then I googled and saw that in order to open a URL in a new tab, we need to do:
```
chrome.tabs.create({ url: newURL });
```
So I did the same i.e
```
var newURL = document.querySelectorAll("a")[lengthA].href
chrome.tabs.create({ url: newURL });
```
But even that doesn't work. What am I doing wrong here?<issue_comment>username_1: I made plenty of extensions with the intention of what you are doing and I used this:
```
function MyFunc() {
var win = window.open("https://www.google.com", '_blank');
win.focus();
}
```
if you add an event listener like this
```
document.addEventListener('DOMContentLoaded', function () {
document.getElementById("MyButton").addEventListener("mousedown", MyFunc);
});
```
It should work perfectly
Upvotes: 1 <issue_comment>username_2: Content scripts don't have access to all Chrome APIs, so you need to send a message to an event page.
Inside your content script, send a message to your event page like this:
```
chrome.runtime.sendMessage({otab: "sendx", xname: newURL});
```
Now in your event page, you need to receive the message and open the tab:
```
chrome.runtime.onMessage.addListener(function(request, sender, senderResponse){
if (request.otab == "sendx") {
chrome.tabs.create({url: request.xname });
}
})
```
refer this document <https://developer.chrome.com/extensions/content_scripts>
Upvotes: 1 [selected_answer]
|
2018/03/18
| 575 | 2,209 |
<issue_start>username_0: I'm invoking a function like below and understand this is why I'm getting this error. Is there a way to not invoke the function but still pass the event property?
```
onMouseOver={(event) => { this.moveBall(event) }}
```
The reason for wanting to do this is so I can do a check in the function like so:
```
const element = event.target ? event.target : event;
```
As I want to re-use this function to pass an element through on load:
```
// Below line is in a constructor.
this.navItem = document.querySelector('.navigation__item');
// Being called after my render
this.moveBall(this.props.navItem);
```
Feels like this should be doable..
I've managed to fix this with the below code but I believe that there must be a better way to achieve this:
```
window.addEventListener('load', () => {
const activeState = document.querySelector('.navigation__item .active')
this.moveBall(activeState)
});
```
\*\* Update \*\*
Full component code
<https://jsfiddle.net/fvn1pu5r/><issue_comment>username_1: Your moveBall function is being called with `undefined` as the argument at some stage. The `event.target ?` check then crashes with the error you gave.
The `onMouseOver` is likely always fine, as React supplies that.
Instead, I imagine it's the manual call you gave at the end. Is there a time when your `this.props.navItem` doesn't have a value?
Worth logging out that property right before calling the function to be sure it's always as you expect.
Upvotes: 0 <issue_comment>username_2: According to your last update, all you need is to move the first call to this.moveBall into the React lifecycle hook componentDidMount. This ensures that the DOM will have `.navigation__item` nodes in it. So, remove the lines
```
window.addEventListener('load', () => {
const activeState = document.querySelector('.navigation__item .active')
this.moveBall(activeState)
});
```
from `render` method and add `componentDidMount` method to your class, like this:
```
componentDidMount() {
const activeState = document.querySelector('.navigation__item .active');
this.moveBall(activeState);
}
```
This should work.
Upvotes: 2 [selected_answer]
|
2018/03/18
| 901 | 2,635 |
<issue_start>username_0: I can't get a connection chain with an ssh one-liner to work.
Chain:
My PC -> jumphost -> Bastion -> my app X host(sharing subnet with Bastion)
- Jumphost expects private key A
- Bastion and X host both expect private key B
```
my pc> ssh -i /path_to_priv_key_for_X/id_rsa -o StrictHostKeyChecking=no -o
"ProxyCommand ssh -p 22 -W %h:%p -o \"ProxyCommand ssh -p 24 -W %h:%p
-i /path_to_key_jump/id_rsa jumphostuser@jumphostdomain\" -i
/path_to_bastion_key/id_rsa bastionuser@ip_to_bastion" myappuser@subnet_ip
```
Above does not work, but
```
ssh -i /path_to_bastion_key/id_rsa -o "ProxyCommand ssh -p 24 -W
%h:%p -i /path_to_key_jump/id_rsa jumphostuser@jumphostdomain"
bastionuser@ip_to_bastion
```
works, so I can access bastion with one liner, but adding app x host in the command chain does not work, wonder why?
I can step by step manually access the myapp X host like this
```
mypc> ssh -p 24 -i path_to_key_jump/id_rsa jumphostuser@jumphostdomain
jumphost> ssh -i /path_to_bastion_key/id_rsa bastionuser@ip_to_bastion
bastion> ssh myappuser@subnet_ip
myapp>
```
How to make in command line two hops over two jump hosts both requiring different key without ssh config?<issue_comment>username_1: Something which is working for me surprisingly well is ssh with `-J` option:
>
>
> ```
> -J destination
> Connect to the target host by first making a ssh connection
> to the jump host described by destination and then establishing a TCP
> forwarding to the ultimate destination from there.
>
> ```
>
>
In fact, it's a feature which I was not aware of for a very long time:
>
>
> ```
> Multiple jump hops may be specified separated by comma characters.
>
> ```
>
>
So multi-hop like `PC -> jump server 1 -> jump server 2 -> target server` (in my example: `PC -> vpn -> vnc -> ece server`) can be done with one combo:
```
$ ssh -J vpn,scs694@tr200vnc rms@tr001tbece11
```
Of course, it is most handy to have ssh keys to open pwd-less connections (`PC->vpn`, `vpn -> vnc` and `vnc -> target`).
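When each hop needs its own private key, as in the original question, `-J` alone cannot express that on the command line, but a few `~/.ssh/config` entries can. A sketch using the hostnames, users, and key paths from the question (OpenSSH 7.3+ for `ProxyJump`):

```
Host jumphost
    HostName jumphostdomain
    Port 24
    User jumphostuser
    IdentityFile /path_to_key_jump/id_rsa

Host bastion
    HostName ip_to_bastion
    User bastionuser
    IdentityFile /path_to_bastion_key/id_rsa
    ProxyJump jumphost

Host myapp
    HostName subnet_ip
    User myappuser
    IdentityFile /path_to_priv_key_for_X/id_rsa
    ProxyJump bastion
```

After that, a plain `ssh myapp` performs the whole three-hop chain with the right key at each hop.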
I hope it will help,
username_1
Upvotes: 2 <issue_comment>username_2: To add to the above. My use-case was a triple-hop to a database server, which looked like Server 1 (Basic Auth) --> Server 2 (Token) --> Server 3 (Basic Auth) --> DB Server (Port Forward).
After quite a few hours of turmoil, the solution was:
```
ssh -v -4 -J username@server1,username@server2 -N username@Server3 -L 1122:dbserver:{the_database_port_number}
```
Then I was able to just have the DB client hit localhost:1122 where 1122 can be any free port number on your localhost.
Upvotes: 2
|
2018/03/18
| 610 | 2,069 |
<issue_start>username_0: I am trying to create an Interface called `IService` and have an instance of it created in the constructor of my `MainViewModel`, similar to how we pass `IRegionManager` in prism, i.e like this:
```
public MainViewModel(IService service, IRegionManager regionManager)
{
_service = service;
_regionManager = regionManager;
}
```
However, when I go to run the code I get an error as follows: `InvalidOperationException: The current type, IService , is an interface and cannot be constructed. Are you missing a type mapping?`
I'm not sure what I am doing wrong as I've seen this created as intended before. Any ideas?<issue_comment>username_1: Something which is working for me surprisingly well is ssh with `-J` option:
>
>
> ```
> -J destination
> Connect to the target host by first making a ssh connection
> to the jump host described by destination and then establishing a TCP
> forwarding to the ultimate destination from there.
>
> ```
>
>
In fact, I's about its feature which I was not aware of for very long time:
>
>
> ```
> Multiple jump hops may be specified separated by comma characters.
>
> ```
>
>
So multi-hop like `PC -> jump server 1 -> jump server 2 -> target server` (in my example: `PC -> vpn -> vnc -> ece server` can be done with one combo:
```
$ ssh -J vpn,scs694@tr200vnc rms@tr001tbece11
```
Of course, most handy is to have ssh keys to open pwd-less connections (`PC->vpn` and `vpn -> vnc` and `vnc -> target`.
I hope it will help,
username_1
Upvotes: 2 <issue_comment>username_2: To add to the above. My use-case was a triple-hop to a database server, which looked like Server 1 (Basic Auth) --> Server 2 (Token) --> Server 3 (Basic Auth) --> DB Server (Port Forward).
After quite a few hours of turmoil, the solution was:
```
ssh -v -4 -J username@server1,username@server2 -N username@Server3 -L 1122:dbserver:{the_database_port_number}
```
Then I was able to just have the DB client hit localhost:1122 where 1122 can be any free port number on your localhost.
Upvotes: 2
|
2018/03/18
| 298 | 1,207 |
<issue_start>username_0: everybody, I am relatively new to SQL and I am currently testing my database tables using Oracle Live SQL. I have a table called Customer and a table called Contact. Within the Contact table, I am trying to add a FOREIGN KEY constraint of the Customer\_ID column into my Contact table, but keep getting an ORA-00904: "CUSTOMER\_ID": invalid identifier, error using the code below:
```
ALTER TABLE Contact ADD FOREIGN KEY (Customer_ID) REFERENCES Customer(Customer_ID)
```
Any help would be much appreciated.<issue_comment>username_1: Presumably, you don't have the column `Customer_Id` in `contact`. So try this:
```
ALTER TABLE Contact ADD Customer_Id number; -- the type is a guess
ALTER TABLE Contact ADD FOREIGN KEY (Customer_ID) REFERENCES Customer(Customer_ID);
```
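A quick way to see the new constraint working once both statements above have run (this assumes `Customer_ID` is the primary key of `Customer`, which the `REFERENCES` clause requires, and that the other columns are nullable or omitted here for brevity):

```
INSERT INTO Customer (Customer_ID) VALUES (1);
-- The parent row exists, so this succeeds:
INSERT INTO Contact (Customer_ID) VALUES (1);
-- No Customer row with ID 99, so this fails with
-- ORA-02291: integrity constraint violated - parent key not found
INSERT INTO Contact (Customer_ID) VALUES (99);
```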
Upvotes: 3 [selected_answer]<issue_comment>username_2: So based on the comments on your question, you don't have a Customer\_ID column on your Contact table. The definition of a foreign key is that you have the column you want to reference in both tables.
```
ALTER TABLE Contact ADD Customer_ID int;
ALTER TABLE Contact ADD FOREIGN KEY (Customer_ID) REFERENCES Customer(Customer_ID);
```
Upvotes: 1
|
2018/03/18
| 713 | 2,458 |
<issue_start>username_0: I have a react front-end with an express back end. The express just serves the static build files from the react side in production. I was having a problem with React routing working in production, as many have had, so I fixed it as so:
server.js:
```
app.get('/*', function (req, res) {
res.sendFile(path.join(__dirname, 'client', 'build', 'index.html'));
});
app.get('/api/customers', (req, res) => {
const customers = [
{id: 2, firstName: 'Test', lastName: 'Case'},
{id: 3, firstName: 'Foo', lastName: 'Bar'},
];
res.json(customers);
});
```
The '/\*' solve the routing problem in production, but now the fetch for the '/api/customers' does not work anymore.
customer.js:
```
componentDidMount() {
fetch('/api/customers')
.then(res => res.json())
.then(customers => this.setState({customers}, () => console.log('Customers fetched...', customers)))
.catch(() => console.log('Error'));
}
```
This fetch request logs 'Error' when ran. It seems as though the url for the api is changing somehow as a result of the '/\*' change in server.js, but I am not sure how I should change the fetch argument to make it work. The fetch was working using just the '/':
server.js:
```
app.get('/', function (req, res) {
res.sendFile(path.join(__dirname, 'client', 'build', 'index.html'));
});
```
However, this obviously stops react routing from working in production. What do I need to change my fetch argument to in order to make this work?<issue_comment>username_1: Change the order of the routes in `server.js`:
```
app.get('/api/customers', () => {});
app.get('/*', () => {});
```
The routes in express are [first come first served](https://stackoverflow.com/a/32604002/2957110), and as "/api/customers" matches "/\*", it will return your index.html if you list them the other way around.
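The first-match-wins behaviour can be illustrated without Express at all. Below is a toy router (not Express's real matcher) that walks registered patterns in registration order, showing why the catch-all must come last:

```javascript
// Toy first-match-wins router: routes are checked in registration order,
// so a catch-all pattern registered first shadows everything after it.
function makeRouter() {
  const routes = [];
  return {
    get(pattern, handler) {
      // Convert an Express-style pattern like '/*' into a RegExp.
      const re = new RegExp('^' + pattern.replace(/\*/g, '.*') + '$');
      routes.push({ re, handler });
    },
    dispatch(path) {
      for (const { re, handler } of routes) {
        if (re.test(path)) return handler(); // first match wins
      }
      return null;
    },
  };
}

// Catch-all registered first: the API route is never reached.
const bad = makeRouter();
bad.get('/*', () => 'index.html');
bad.get('/api/customers', () => 'customers json');

// API route registered first: both work as intended.
const good = makeRouter();
good.get('/api/customers', () => 'customers json');
good.get('/*', () => 'index.html');
```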
Upvotes: 1 <issue_comment>username_2: thank you so much for this solution @username_1 !! The order DOES affect the routes. This is my working code
server.js
... imports here ...
```
app.use('/users', require('./users/users.controller'));
app.use('/machineLearning', require('./machineLearning/machineLearning.controller'));
app.use('/public', require('./public/public.controller'));
//for Prod
app.use(express.static(path.join(__dirname,'../L10-ui/dist')));
app.get('/*', function (req, res) {
res.sendFile(path.join(__dirname, '../L10-ui/dist', 'index.html'));
});
```
Upvotes: 0
|
2018/03/18
| 881 | 3,418 |
<issue_start>username_0: I know this is not an usual requirement, but is it possible to create a role based authorization system using React Navigation? If yes, is there a complementary tool to achieve that? Or can this be made using only React Navigation?<issue_comment>username_1: There are a lot of ways to do authorization rules using `react navigation` library.
Here are some good articles to follow:
* <https://medium.com/the-react-native-log/building-an-authentication-flow-with-react-navigation-fb5de2203b5c>
* <https://reactnavigation.org/docs/auth-flow.html>
As I use `redux-saga`, I like to use it to control authentication flow, because it's easy to handle in a more linear way, listening to `redux-persist` actions
I think there is no *right way* to do this, because it depends a lot on your needs, application flow and backend.
Upvotes: 0 <issue_comment>username_2: If you integrate `react-navigation` with `redux`, you will be able to intercept all the navigation actions (with `navigate/` prefix. Eg: `navigate/HOME`) in a redux middleware. You can write your own logic in the middleware to only let authorized actions to reach the reducer.
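A minimal sketch of that middleware idea (the `navigate/` prefix comes from the redux integration; `isAuthorized` is a made-up predicate standing in for your own role rules):

```javascript
// Redux middleware: forward authorized navigation actions, swallow the rest.
const roleGuard = (isAuthorized) => (store) => (next) => (action) => {
  if (action.type.startsWith('navigate/') && !isAuthorized(action)) {
    // Blocked: a real app might dispatch a redirect to a login screen here.
    return;
  }
  return next(action);
};

// Wiring it up manually (normally applyMiddleware does this):
const seen = [];
const guard = roleGuard((action) => action.type !== 'navigate/ADMIN');
const dispatch = guard({ /* store stub */ })((action) => seen.push(action.type));

dispatch({ type: 'navigate/HOME' });   // allowed
dispatch({ type: 'navigate/ADMIN' });  // blocked by the predicate
dispatch({ type: 'other/ACTION' });    // not a navigation action, allowed
```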
Follow this guide for integrating react-navigation to redux - <https://reactnavigation.org/docs/redux-integration.html>.
This video will help you with using middleware for this purpose - <https://www.youtube.com/watch?v=Gjiu7Lgdg3s>.
Upvotes: 0 <issue_comment>username_3: This is simple logic.
Here is my logic which worked perfectly...
```
RequireAuth.js
import React, { Component } from "react";
import { authedUser } from '../../../helper/helpers';
import { Lang } from '../../../helper/Lang';
import Login from "../../screens/form/login";
import {
Container,
Header,
Title,
Content,
Button,
Icon,
H1,
H2,
H3,
Text,
Left,
Right,
Body
} from "native-base";
import styles from "./styles";
const RequireAuth =(obj)=>{
const Component=obj.component;
return class App extends Component {
state = {
isAuthenticated: false,
isLoading: true
}
componentDidMount() {
authedUser({loaduserow:false,noCatch:true}).then((res) => {
if(res.loged){
this.setState({isAuthenticated: true, isLoading: false});}
else{
this.setState({isAuthenticated: false, isLoading: false});}
}).catch(() => {
this.setState({isLoading: false});
})
}
render() {
const { isAuthenticated, isLoading } = this.state;
if(isLoading) {
return(
<H3>Authenticating...</H3>
)
}
if(!isAuthenticated) {
return <Login />
}
return <Component {...this.props} />
}
}}
export default RequireAuth;
```
src/App.js
```
....
import RequireAuth from "./screens/wall/RequireAuth";
...
const AppNavigator = StackNavigator(
{
Drawer: { screen: Drawer },
Login: { screen: Login },
About: { screen: About },
Profile: { screen: RequireAuth({component:Profile,name:'Profile'}) },
.....
```
`authedUser()` is just a simple promise which resolves to `{loged:true}` when the user is authenticated; when not authenticated it resolves to `{loged:false}`.
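For reference, a runnable stand-in for that helper. The real one would read a token from `AsyncStorage` or similar; the plain `storage` object here is an assumption so the sketch runs anywhere:

```javascript
const storage = {};   // stand-in for AsyncStorage / secure storage

// Synchronous core: { loged: true } when a token is present.
const checkAuth = () => ({ loged: Boolean(storage.token) });

// Promise wrapper mirroring the authedUser() call in componentDidMount above.
function authedUser() {
  return Promise.resolve(checkAuth());
}
```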
Upvotes: -1 <issue_comment>username_4: Check this below this might help you
```
https://jasonwatmore.com/post/2019/02/01/react-role-based-authorization-tutorial-with-example
```
Upvotes: -1
|
2018/03/18
| 833 | 3,419 |
<issue_start>username_0: I'm a bit confused on how to use closures or in my case a completion block effectively. In my case, I want to call a block of code when some set of asynchronous calls have completed, to let my caller know if there was an error or success, etc.
So an example of what I'm trying to accomplish might look like the following:
```
// Caller
updatePost(forUser: user) { (error, user) in
if let error = error {
print(error.description)
}
if let user = user {
print("User was successfully updated")
// Do something with the user...
}
}
public func updatePost(forUser user: User, completion: @escaping (Error?, User?) -> () {
// Not sure at what point, and where to call completion(error, user)
// so that my caller knows the user has finished updating
// Maybe this updates the user in the database
someAsyncCallA { (error)
}
// Maybe this deletes some old records in the database
someAsyncCallB { (error)
}
}
```
So ideally, I want my completion block to be called when async block B finishes (assuming async block A finished already, I know this is a BAD assumption). But what happens in the case that async block B finishes first and async block A takes a lot longer? If I call my completion after async block B, then my caller thinks that the method has finished.
In a case like this, say I want to tell the user when updating has finished, but I only really know it has finished when both async blocks have finished. How do I tackle this or am I just using closures wrong?<issue_comment>username_1: try below code:
```
public func updatePost(forUser user: User, completion: @escaping (Error?, User?) -> () {
// Not sure at what point, and where to call completion(error, user)
// so that my caller knows the user has finished updating
// Maybe this updates the user in the database
someAsyncCallA { (error)
// Maybe this deletes some old records in the database
someAsyncCallB { (error)
completion()
}
}
}
```
please try updated answer below:
```
public func updatePost(forUser user: User, completion: @escaping (Error?, User?) -> () {
var isTaskAFinished = false
var isTaskBFinished = false
// Not sure at what point, and where to call completion(error, user)
// so that my caller knows the user has finished updating
// Maybe this updates the user in the database
someAsyncCallA { (error)
// Maybe this deletes some old records in the database
isTaskAFinished = true
if isTaskBFinished{
completion()
}
}
someAsyncCallB { (error)
isTaskBFinished = true
if isTaskAFinished{
completion()
}
}
}
```
Upvotes: -1 <issue_comment>username_2: I don't know if your question has been answered. What I think you are looking for is a DispatchGroup.
```
let dispatchGroup = DispatchGroup()
dispatchGroup.enter()
someAsyncCallA(completion: {
dispatchGroup.leave()
})
dispatchGroup.enter()
someAsyncCallB(completion: {
dispatchGroup.leave()
})
dispatchGroup.notify(queue: .main, execute: {
// When you get here, both calls are done and you can do what you want.
})
```
Really important note: `enter()` and `leave()` calls **must balance**, otherwise you crash with an exception.
Upvotes: 1
|
2018/03/18
| 690 | 2,670 |
<issue_start>username_0: I am mining data from a website through data scraping in Python. I am using the `requests` package for sending the parameters.
Here is the code snippet in Python:
```
for param in paramList:
data = get_url_data(param)
def get_url_data(param):
post_data = get_post_data(param)
headers = {}
headers["Content-Type"] = "text/xml; charset=UTF-8"
headers["Content-Length"] = len(post_data)
headers["Connection"] = 'Keep-Alive'
headers["Cache-Control"] = 'no-cache'
page = requests.post(url, data=post_data, headers=headers, timeout=10)
data = parse_page(page.content)
return data
```
The variable `paramList` is a list of more than 1000 elements and the endpoint `url` remains the same. I was wondering if there is a better and more faster way to do this ?
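For reference, a network-free sketch of one common speed-up: reuse a single `requests.Session` (connection keep-alive to the one endpoint) and run the calls from a thread pool instead of a sequential loop. `fetch_one` below is a stub standing in for the real `requests.post` call so the pattern is runnable anywhere:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_one(param):
    # Real version would do something like:
    #   page = session.post(url, data=get_post_data(param), headers=headers, timeout=10)
    #   return parse_page(page.content)
    return f"parsed-{param}"

def fetch_all(param_list, max_workers=8):
    # Each worker runs fetch_one; pool.map preserves the input order.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch_one, param_list))

results = fetch_all(["a", "b", "c"])
```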
Thanks
|
2018/03/18
| 544 | 2,093 |
<issue_start>username_0: Here is a basic class I have set up to calculate earnings with stocks.
Focus on the "Stock" constructor here:
```
public class Stock{
private String symbol;
private int totalShares;
private double totalCost;
public void Stock(String symbol){
this.symbol = symbol;
totalShares = 0;
totalCost = 0.0;
}
public double getProfit(double currentPrice){
double marketValue = totalShares * currentPrice;
return marketValue - totalCost;
}
public void purchase(int shares, double pricePerShare){
totalShares += shares;
totalCost += shares*pricePerShare;
}
public int getTotalShares(){
return totalShares;
}
}
```
I have created a subclass called DividendStock, which is supposed to calculate dividend earnings.
```
public class DividendStock extends Stock{
private double dividends;
public DividendStock(String symbol){
super(symbol);
dividends = 0.0;
}
public void payDividend(double amountPerShare){
dividends += amountPerShare*getTotalShares();
}
}
```
The constructor of this class doesn't allow me to call the superclass's constructor: super(symbol);
The error message is as follows: "constructor Stock in class Stock cannot be applied to given types;"
I have searched for a solution, but everything seems to be in place.
Any ideas of why it doesn't allow me to call this constructor?<issue_comment>username_1: A constructor does not have any return type. When you add a return type, it becomes a normal method.
```
public void Stock(String symbol) {
this.symbol = symbol;
totalShares = 0;
totalCost = 0.0;
}
```
Should be
```
public Stock(String symbol) {
this.symbol = symbol;
totalShares = 0;
totalCost = 0.0;
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Replace it `public void Stock(String symbol){ this.symbol = symbol; totalShares = 0; totalCost = 0.0; }`
To `public Stock(String symbol){ this.symbol = symbol; totalShares = 0; totalCost = 0.0; }`
Upvotes: 1
|
2018/03/18
| 1,384 | 5,083 |
<issue_start>username_0: I am trying to create a subclass of JavaScript's Array. I want to initialize the subclass with an array-type argument and add a method to remove an element from the array (subclass).
My code looks like this:
```js
class CustomArray extends Array {
constructor(array) {
console.log('Initiating array:', array)
super(...array);
}
remove(element) {
let index = this.indexOf(element);
if (index > -1) {
return this.splice(index, 1);
}
return [];
}
}
var a = ['a', 'b', 'c', 'd', 'e'];
var list = new CustomArray(a)
console.log('list:', list);
console.log('remove:', list.remove('c'));
console.log('list:', list);
```
The problem is that when I call [.splice()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/splice) it removes the element from the array, but it also returns an array of the deleted elements; actually it returns a new instance of my subclass CustomArray, which should be initialized with an array-type argument, but .splice() initializes it with integer-type arguments.
Here is an example of what I think happens when I call [.splice()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/splice):
Let's say we have instance `list` of `CustomArray` class initiated with argument `['a','b','c','d','e']` and then we call method `list.remove('c')`. (Just like in code snippet).
Method `remove` of the `CustomArray` class checks the index of `c` in the array `['a','b','c','d','e']`, which is `2`, and then calls the method `this.splice(2,1)`, which should remove 1 element at index 2 in the array `['a','b','c','d','e']`. Method `splice` removes element `c` from the array, but also returns something like `new CustomArray(1)` because one element was removed from the array; so it tries to create an array of length 1, but that fails because class `CustomArray` is expecting an array.
I want to prevent `splice` method from initiating a new instance of `CustomArray` class and instead return normal array (an instance of Array object).
[Jsbin link.](https://jsbin.com/boqomuj/1/edit?js,console)<issue_comment>username_1: >
> I want to prevent `splice` method from initiating a new instance of `CustomArray` class and instead return normal array (an instance of `Array` object).
>
>
>
Then you need to create a different method with a different name. The semantics of `splice` are [clearly and precisely defined](https://tc39.github.io/ecma262/#sec-array.prototype.splice); they form a contract for the `Array` type. Having your `CustomArray` violate that contract would mean it isn't an `Array` anymore, it's something *array-like*, and shouldn't extend `Array`.
Since your method is called `remove`, that's fine; if you want `remove` to return `Array`, not `CustomArray`, you just need to implement the logic yourself:
```
remove(element) {
let index = this.indexOf(element);
if (index > -1) {
const newLength = this.length - 1;
while (index < newLength) {
this[index] = this[index + 1];
++index;
}
this.length = newLength;
return [element];
}
return [];
}
```
---
Alternately, of course, make `CustomArray`'s constructor work correctly when called by the various `Array.prototype` methods. (The one you have in the question works just fine, other than logging something you don't expect with `console.log`.)
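That alternative can be sketched like this. The point is that `Array.prototype` methods invoke the subclass constructor with a single numeric length, so the constructor must accept both calling styles (note that `new CustomArray(3)` with a bare number then stays ambiguous, which is a trade-off of this approach):

```javascript
class CustomArray extends Array {
  constructor(arrayOrLength) {
    if (Array.isArray(arrayOrLength)) {
      super(...arrayOrLength);   // our own code: new CustomArray(['a', 'b'])
    } else if (arrayOrLength === undefined) {
      super();                   // new CustomArray() with no argument
    } else {
      super(arrayOrLength);      // internal calls: new CustomArray(1)
    }
  }
  remove(element) {
    const index = this.indexOf(element);
    return index > -1 ? this.splice(index, 1) : [];
  }
}

const list = new CustomArray(['a', 'b', 'c']);
const removed = list.remove('b'); // a CustomArray containing ['b']
```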
Upvotes: 2 [selected_answer]<issue_comment>username_2: It is possible have `splice` return a standard array -- so without it calling your constructor. It is by changing the [`@@species` property](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/@@species) of your custom class, which determines which constructor will be used. But be aware that this will not only affect `splice`, but also *all* other methods that would create a new instance, including `map`, `filter`, `slice`, ...
You can change the `@@species` property by overwriting the corresponding static getter:
```js
class CustomArray extends Array {
static get [Symbol.species]() { return Array; } // <-----------
constructor(array) {
console.log('Initiating array:', array)
super(...array);
}
remove(element) {
let index = this.indexOf(element);
if (index > -1) {
return this.splice(index, 1); // Now calls Array constructor, not CustomArray
}
return [];
}
}
var a = ['a', 'b', 'c', 'd', 'e'];
var list = new CustomArray(a)
console.log('list:', list);
console.log('remove:', list.remove('c'));
console.log('list:', list);
// Some examples of effects on other methods
console.log(list.map(x => x) instanceof CustomArray); // false
console.log(list.filter(x => 1) instanceof CustomArray); // false
console.log(list.slice() instanceof CustomArray); // false
console.log(list.concat() instanceof CustomArray); // false
// Other methods, that do not return a new instance, are not affected:
console.log(list.reverse() instanceof CustomArray); // true
console.log(list.sort() instanceof CustomArray); // true
```
Upvotes: 2
|
2018/03/18
| 593 | 1,882 |
<issue_start>username_0: It's really basic stuff but it gets on my nerves;
Why this code doesn't work properly?
```
for i in range(0, 101):
if i % 3 == 0 and i != 0:
print '{} Three'.format(i)
elif i % 5 == 0 and i % 3 == 0 and i != 0:
print '{} FiveThree'.format(i)
else:
print "{}".format(i)
```
Is it because of conditions? So if I want to write the code in this form I must write the complicated condition first and then the simple one?<issue_comment>username_1: Any number that is a multiple of both 3 and 5 will make the first `if` condition true and will never be checked against the second `if` condition.
You should flip the conditions:
```
for i in range(0, 101):
if i % 5 == 0 and i % 3 == 0 and i != 0:
print('{} FiveThree'.format(i))
elif i % 3 == 0 and i != 0:
print('{} Three'.format(i))
else:
print("{}".format(i))
```
But this is wasteful. It checks `i % 3 == 0 and i != 0` twice.
I'm leaving the optimization as an exercise.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Your second condition is a subset of the first condition, so execution will never enter that block. Change the order of the conditions and it will work:
```
for i in range(0, 101):
if i % 5 == 0 and i % 3 == 0 and i != 0:
print '{} FiveThree'.format(i)
elif i % 3 == 0 and i != 0:
print '{} Three'.format(i)
else:
print "{}".format(i)
```
Upvotes: 0 <issue_comment>username_3: The `if`, `elif`, `else` clauses are mutually exclusive. One option is to put the strongest conditions first as username_1 suggests. Another option is to have multiple levels of conditions.
```
for i in range(0, 101):
if i % 3 == 0 and i != 0:
if i % 5 == 0:
print '{} FiveThree'.format(i)
else:
print '{} Three'.format(i)
else:
print "{}".format(i)
```
Upvotes: 0
|
2018/03/18
| 1,018 | 2,801 |
<issue_start>username_0: **This** is for anyone if they are looking for a solution of such kind of a problem. I will try to explain the problem here and put the answer after that.
**Question**
I have `two csv` files (`file1.csv` and `file2.csv`): in which `cookieid` column is common in both. When I load it in dataframe it looks like this for ex:
```
file1.csv
col1 col2 col3 CookieID
a0 b1 c1 12
a1 b2 c2 13
a2 b3 c3 12
a1 b1 c1 145
a3 b4 c2 555
file2.csv
col4 col9 col55 CookieID colsales
Aba xxx yyy 12 567
bab bhh jjj 13 0
ccc kkk lll 222 67
Aba xxx yyy 1 6
ccc kkk jjj 666 90
```
**Task**: Need to make separate files for each `CookieID` with all their information in it from both the files. For ex:
I read from `file1.csv` and `file2.csv`. I select my first `cookieid` which is `12` and make a file `12.csv` (or any other suitable name) and inside it I put `a0 b1 c1 a2 b3 c3 Aba xxx yyy 567`. i.e values of rows where that `cookieid` is present from both the files.
**Solution** - Provided below for the community. It can be implemented if you want to find out the cookie journey from your dataset.<issue_comment>username_1: ```
import pandas as pd
import numpy as np
import string
DATA_FILE1 = 'file1.csv'
## encodings vary from file to file
df1 = pd.read_csv(DATA_FILE1,sep=',',encoding='ISO-8859-1')
DATA_FILE2 = 'file2.csv'
df2 = pd.read_csv(DATA_FILE2,sep=',',encoding='ISO-8859-1')
#creating a list of unique cookieids from both the files
cookielist =[]
uniqcookie1 = df1["CookieID"].unique()
uniqcookie2 = df2["CookieID"].unique()
for i in uniqcookie1:
cookielist.append(i)
for j in uniqcookie2:
cookielist.append(j)
for i in cookielist:
dfout1 = df1.loc[df1['CookieID'] == i]
dfout2 = df2.loc[df2['CookieID'] == i]
bigdata = pd.concat([dfout1, dfout2], axis=1)
k = bigdata.loc[:, bigdata.columns != 'CookieID']
k.to_csv(str(i)+".txt",header=None, index=None, sep=' ', mode='a')
```
Upvotes: 1 [selected_answer]<issue_comment>username_2: I think you can also use a solution with `set`s to write only the rows whose `CookieID` is common to both `DataFrame`s:
```
DATA_FILE1 = 'file1.csv'
df1 = pd.read_csv(DATA_FILE1,sep=',',encoding='ISO-8859-1',index_col=['CookieID'])
DATA_FILE2 = 'file2.csv'
df2 = pd.read_csv(DATA_FILE2,sep=',',encoding='ISO-8859-1', index_col=['CookieID'])
cookieset = set(df1.index).intersection(df2.index)
print (cookieset)
{12, 13}
for i in cookieset:
dfout1 = df1.loc[[i]].values.ravel()
dfout2 = df2.loc[[i]].values.ravel()
bigdata = np.concatenate([dfout1, dfout2])
print (bigdata)
k = pd.DataFrame([bigdata])
k.to_csv(str(i)+".txt",header=None, index=None, sep=' ')
```
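The same result can also be reached with a single inner `merge` on `CookieID` instead of a manual set intersection. A sketch with tiny stand-in frames (the real files would be loaded with `read_csv` as above):

```python
import pandas as pd

# Tiny stand-ins for file1.csv / file2.csv (only a couple of columns each):
df1 = pd.DataFrame({'CookieID': [12, 13, 12], 'col1': ['a0', 'a1', 'a2']})
df2 = pd.DataFrame({'CookieID': [12, 13, 222], 'col4': ['Aba', 'bab', 'ccc']})

# Inner join keeps only the CookieIDs present in both files.
merged = pd.merge(df1, df2, on='CookieID', how='inner')

# One output file per cookie, without the CookieID column itself.
for cookie_id, group in merged.groupby('CookieID'):
    out = group.drop(columns='CookieID')
    # out.to_csv(f"{cookie_id}.txt", header=False, index=False, sep=' ')
```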
Upvotes: 1
|
2018/03/18
| 955 | 2,975 |
<issue_start>username_0: I'm new to dynamoDB and I'm trying to write some data to the table using Lambda.
So far I have this:
```
'AddFood': function () {
var FoodName = this.event.request.intent.slots.FoodName.value;
var FoodCalories = this.event.request.intent.slots.FoodCalories.value;
console.log('FoodName : ' + FoodName);
const params = {
TableName: 'Foods',
Item: {
'id': {"S": 3},
'calories': {"S": FoodCalories},
'food': {"S": FoodName}
}
};
writeDynamoItem(params, myResult=>{
var say = '';
say = myResult;
say = FoodName + ' with ' + FoodCalories + ' calories has been added ';
this.response.speak(say).listen('try again');
this.emit(':responseReady');
});
function writeDynamoItem(params, callback) {
var AWS = require('aws-sdk');
AWS.config.update({region: AWSregion});
var docClient = new AWS.DynamoDB();
console.log('writing item to DynamoDB table');
docClient.putItem(params, function (err, data) {
if (err) {
callback(err, null)
} else {
callback(null, data)
}
});
}
}
```
Does anyone know why the data is not appearing in the database?
I have checked the IAM and the policy is set to AmazonDynamoDBFullAccess.
2018/03/18
| 854 | 2,314 |
<issue_start>username_0: Every time I try to launch my project I get this error:
`$ ng serve`
>
> Your global Angular CLI version (1.7.3) is greater than your local
> version (1.6.8). The local Angular CLI version is used.
>
>
> To disable this warning use "ng set --global
> warnings.versionMismatch=false".
>
>
> Cannot read property 'config' of null
>
>
> TypeError: Cannot read property 'config' of null
> at Class.run (D:\Projekte\Advanced-ban\webinterface\node\_modules\@angular\cli\tasks\serve.js:51:63)
> at check\_port\_1.checkPort.then.port (D:\Projekte\Advanced-an\webinterface\node\_modules\@angular\cli\commands\serve.js:123:26)
> at
> at process.\_tickCallback (internal/process/next\_tick.js:188:7)
>
>
><issue_comment>username_1: ```
import pandas as pd
import numpy as np
import string
DATA_FILE1 = 'file1.csv'
## encodings vary from file to file
df1 = pd.read_csv(DATA_FILE1,sep=',',encoding='ISO-8859-1')
DATA_FILE2 = 'file2.csv'
df2 = pd.read_csv(DATA_FILE2,sep=',',encoding='ISO-8859-1')
#creating a list of unique cookieids from both the files
cookielist =[]
uniqcookie1 = df1["CookieID"].unique()
uniqcookie2 = df2["CookieID"].unique()
for i in uniqcookie1:
cookielist.append(i)
for j in uniqcookie2:
cookielist.append(j)
for i in cookielist:
dfout1 = df1.loc[df1['CookieID'] == i]
dfout2 = df2.loc[df2['CookieID'] == i]
bigdata = pd.concat([dfout1, dfout2], axis=1)
k = bigdata.loc[:, bigdata.columns != 'CookieID']
k.to_csv(str(i)+".txt",header=None, index=None, sep=' ', mode='a')
```
Upvotes: 1 [selected_answer]<issue_comment>username_2: I think you can also use a solution with `set`s to write only the rows common to both `DataFrame`s, matched by the `CookieID` column:
```
DATA_FILE1 = 'file1.csv'
df1 = pd.read_csv(DATA_FILE1,sep=',',encoding='ISO-8859-1',index_col=['CookieID'])
DATA_FILE2 = 'file2.csv'
df2 = pd.read_csv(DATA_FILE2,sep=',',encoding='ISO-8859-1', index_col=['CookieID'])
cookieset = set(df1.index).intersection(df2.index)
print (cookieset)
{12, 13}
for i in cookieset:
dfout1 = df1.loc[[i]].values.ravel()
dfout2 = df2.loc[[i]].values.ravel()
bigdata = np.concatenate([dfout1, dfout2])
print (bigdata)
k = pd.DataFrame([bigdata])
k.to_csv(str(i)+".txt",header=None, index=None, sep=' ')
```
Upvotes: 1
|
2018/03/18
| 1,026 | 4,397 |
<issue_start>username_0: We are working on a system that is supposed to 'run' jobs on distributed systems.
When jobs are accepted they need to go through a pipeline before they can be executed on the end system.
We've decided to go with a micro-services architecture, but there is one thing that bothers me, and I'm not sure what the best practice would be.
When a job is accepted it will first be persisted into a database, then - each micro-service in the pipeline will do some additional work to prepare the job for execution.
I want the persisted data to be updated at each such station in the pipeline to reflect the actual state of the job, or its status in the pipeline.
In addition, while a job is being executed on the end system - its status should also get updated.
What would be the best practice in sense of updating the database (job's status) in each station:
1. Each such station (micro-service) in the pipeline accesses the database directly and updates the job's status
2. There is another micro-service that exposes the data (REST) and serves as DAL, each micro-service in the pipeline updates the job's status through this service
3. Other?....
Help/advise would be highly appreciated.
Thanx a lot!!<issue_comment>username_1: Multiple microservices accessing the database is not recommended. Here you have the case where each of the service needs to be triggered, then they update the data and then some how call the next service.
You really need a mechanism to orchestrate the services. A workflow engine might fit the bill.
I would however suggest an event driven system. I might be going beyond with a limited knowledge of the data that you have. Have one service that gives you basic crud on data and other services that have logic to change the data (I would at this point would like to ask why you want different services to change the state, if its a biz req, its fine) Once you get the data written just create an event to which services can subscribe and react to it.
This will allow you to easily add more states to your pipeline in future.
You will need a service to manage the event queue.
As far as logging the state of the event was concerned it can be done easily by logging the events.
If you opt for workflow route you may use Amazon SWF or Camunda or really there quite a few options out there.
If going for the event route you need to look into event driven system in mciroservies.
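A rough, language-agnostic sketch of this event route (written in Python purely for illustration; `JobStore`, `EventBus` and the handler are made-up names, not a real framework — in production the bus would be a message broker such as Kafka or RabbitMQ):

```python
class JobStore:
    """The single CRUD service that owns job state."""
    def __init__(self):
        self.jobs = {}

    def create(self, job_id):
        self.jobs[job_id] = "ACCEPTED"

    def set_status(self, job_id, status):
        self.jobs[job_id] = status


class EventBus:
    """In-memory stand-in for a message broker."""
    def __init__(self):
        self.handlers = []

    def subscribe(self, handler):
        self.handlers.append(handler)

    def publish(self, event):
        # Deliver the event to every subscribed service.
        for handler in self.handlers:
            handler(event)


store, bus = JobStore(), EventBus()

# Each pipeline stage reacts to events and reports the job status back to the store.
bus.subscribe(lambda e: store.set_status(e["job_id"], e["status"]))

store.create("job-1")
bus.publish({"job_id": "job-1", "status": "VALIDATED"})
bus.publish({"job_id": "job-1", "status": "RUNNING"})
print(store.jobs["job-1"])  # RUNNING
```

Adding a new pipeline state is then just another published event; no service needs to know about the others.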
Upvotes: 0 <issue_comment>username_2: Accessing a shared database between microservices is strongly discouraged, as it violates a basic rule of microservices architecture:

a microservice must be autonomous and keep its own logic and data.

Also, to achieve a good microservice design you should loosely couple your microservices.
Upvotes: 1 [selected_answer]<issue_comment>username_3: To add to what was said by @username_1 and @<NAME>
I'd consider writing the state from the units of work in your pipeline to a view (a table or insert-only cache). You can use messaging, or simply insert a row into that view, and have the readers of the state pick up the correct state based on some logic (a date, a state, or a composite key). As this view is not really owned by any domain service, it can be made available to any readers (read-only) to consume...
Upvotes: 1 <issue_comment>username_4: Consider also SAGA Pattern
>
> A Saga is a sequence of local transactions where each transaction updates data within a single service. The first transaction is initiated by an external request corresponding to the system operation, and then each subsequent step is triggered by the completion of the previous one.
>
>
>
<http://microservices.io/patterns/data/saga.html>
<https://dzone.com/articles/saga-pattern-how-to-implement-business-transaction>
<https://medium.com/@tomasz_96685/saga-pattern-and-microservices-architecture-d4b46071afcf>
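For intuition, here is a minimal sketch of the saga idea (Python used only for illustration; `run_saga` and the step names are made up): each step is a local transaction paired with a compensating action, and on failure the completed steps are undone in reverse order:

```python
def run_saga(steps):
    """Each step is (action, compensation). On failure, undo completed steps."""
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)
    except Exception:
        for compensate in reversed(done):  # roll back in reverse order
            compensate()
        return "ROLLED_BACK"
    return "COMMITTED"


log = []
ok = [(lambda: log.append("reserve"), lambda: log.append("unreserve")),
      (lambda: log.append("charge"), lambda: log.append("refund"))]
print(run_saga(ok))       # COMMITTED


def boom():
    raise RuntimeError("payment failed")


log.clear()
failing = [(lambda: log.append("reserve"), lambda: log.append("unreserve")),
           (boom, lambda: log.append("refund"))]
print(run_saga(failing))  # ROLLED_BACK
print(log)                # ['reserve', 'unreserve']
```

In a real microservices setup each action and compensation would be a call to a different service, triggered by the completion event of the previous step.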
Upvotes: 1 <issue_comment>username_5: If you would like to code the workflow:
Microservice A accepts the Job and the commands that update it.

Microservice B provides the read model for the Job.

Based on JobCreatedEvents, use a messaging queue to process and update the job through the pipeline, updating the JobStatus at every node. I am assuming you know about queues and consumers.

I am myself new to Camunda (a workflow engine); it might also be usable here, but I am not completely sure.
Upvotes: 1
|
2018/03/18
| 1,565 | 5,570 |
<issue_start>username_0: I've learned that there are 4 kinds of method references. But I don't understand the difference between "***Reference to a static method***" and "***Reference to an instance method of an arbitrary object of a particular type***".
For example:
```
List<String> weeks = new ArrayList<>();
weeks.add("Monday");
weeks.add("Tuesday");
weeks.add("Wednesday");
weeks.add("Thursday");
weeks.add("Friday");
weeks.add("Saturday");
weeks.add("Sunday");
weeks.stream().map(String::toUpperCase).forEach(System.out::println);
```
The method `toUpperCase` is **not** a `static` method... so why can one write in the way above, rather than needing to use it this way:
```
weeks.stream().map(s -> s.toUpperCase()).forEach(System.out::println);
```<issue_comment>username_1: I highly recommend you to read the Oracle's article about method references: <https://docs.oracle.com/javase/tutorial/java/javaOO/methodreferences.html>
That is the form of a lambda expression:
```
s->s.toUpperCase()
```
And that is a method reference:
```
String::toUpperCase
```
Semantically, the method reference is the same as the lambda expression, it just has different syntax.
Upvotes: 0 <issue_comment>username_2: Method references are quite an intelligent feature in Java. When you use a non-static method reference like `String::toUpperCase`, Java automatically knows that it needs to call `toUpperCase` on the first parameter. If a lambda expression expects two parameters, the method will be called on the *first parameter* and the *second parameter* will be passed as an *argument* of the method. Let's take an example.
```
List<String> empNames = Arrays.asList("Tom","Bob");
String s1 = empNames.stream().reduce("",String::concat); //line -1
String s2 = empNames.stream().reduce("",(a,b)->a.concat(b)); // line -2
System.out.println(s1);
System.out.println(s2);
```
So, in the example above, on line 1 the `String#concat` method is called on the first parameter (that is `a` on line 2) and the second parameter (`b` on line 2) is passed as the argument.

This is also possible for methods with multiple arguments (more than 2), but you need to be very careful about the order of the parameters.
Upvotes: 1 <issue_comment>username_3: Explanation
-----------
>
> The method `toUpperCase` is **not** a static method... so why can one write in the way above, rather than needing to use it this way:
>
>
>
> ```
> weeks.stream().map(s -> s.toUpperCase()).forEach(System.out::println);
>
> ```
>
>
Method references **are not limited** to `static` methods. Take a look at
```
.map(String::toUpperCase)
```
it is equivalent to
```
.map(s -> s.toUpperCase())
```
Java will just call the method you have referenced on the elements in the stream. In fact, this is the whole point of references.
The official [Oracle tutorial](https://docs.oracle.com/javase/tutorial/java/javaOO/methodreferences.html) explains this in more detail.
---
Insights, Examples
------------------
The method `Stream#map` ([documentation](https://docs.oracle.com/javase/9/docs/api/java/util/stream/Stream.html#map-java.util.function.Function-)) has the following signature:
```
<R> Stream<R> map(Function<? super T, ? extends R> mapper)
```
So it expects some `Function`. In your case this is a `Function<String, String>`: it takes a `String`, applies some method to it and then returns a `String`.
Now we take a look at `Function` ([documentation](https://docs.oracle.com/javase/9/docs/api/java/util/function/Function.html)). It has the following method:
>
> `R apply(T t)`
>
>
> Applies this function to the given argument.
>
>
>
This is exactly what you are providing with your method reference. You provide a `Function` that applies the given method reference on all objects. Your `apply` would look like:
```
String apply(String t) {
return t.toUpperCase();
}
```
And the Lambda expression
```
.map(s -> s.toUpperCase())
```
generates the **exact same** `Function` with the same `apply` method.
So what you could do is
```
Function<String, String> toUpper1 = String::toUpperCase;
Function<String, String> toUpper2 = s -> s.toUpperCase();
System.out.println(toUpper1.apply("test"));
System.out.println(toUpper2.apply("test"));
```
And they will both output `"TEST"`, they behave the same.
More details on this can be found in the Java Language Specification [JLS§15.13](https://docs.oracle.com/javase/specs/jls/se8/html/jls-15.html#jls-15.13). Especially take a look at the examples in the end of the chapter.
On another note, how does Java even know that `String::toUpperCase` should be interpreted as a `Function`? Well, in general it does not. That's why we always need to clearly specify the type:
```
// The left side of the statement makes it clear to the compiler
Function<String, String> toUpper1 = String::toUpperCase;
// The signature of the 'map' method makes it clear to the compiler
.map(String::toUpperCase)
```
Also note that we can only do such stuff with **functional interfaces**:
```
@FunctionalInterface
public interface Function<T, R> { ... }
```
---
Note on `System.out::println`
-----------------------------
For some reason you are not confused by
```
.forEach(System.out::println);
```
This method **is not** `static` either.
The `out` is an ordinary object instance and the `println` is a non `static` method of the `PrintStream` ([documentation](https://docs.oracle.com/javase/9/docs/api/java/io/PrintStream.html)) class. See [System#out](https://docs.oracle.com/javase/9/docs/api/java/lang/System.html#out) for the objects documentation.
Upvotes: 3 [selected_answer]
|
2018/03/18
| 1,193 | 2,917 |
<issue_start>username_0: Suppose I have the following data frame in Spark Scala:
```
+--------+--------------------+--------------------+
|Index | Date| Date_x|
+--------+--------------------+--------------------+
| 1|2018-01-31T20:33:...|2018-01-31T21:18:...|
| 1|2018-01-31T20:35:...|2018-01-31T21:18:...|
| 1|2018-01-31T21:04:...|2018-01-31T21:18:...|
| 1|2018-01-31T21:05:...|2018-01-31T21:18:...|
| 1|2018-01-31T21:15:...|2018-01-31T21:18:...|
| 1|2018-01-31T21:16:...|2018-01-31T21:18:...|
| 1|2018-01-31T21:19:...|2018-01-31T21:18:...|
| 1|2018-01-31T21:20:...|2018-01-31T21:18:...|
| 2|2018-01-31T19:43:...|2018-01-31T20:35:...|
| 2|2018-01-31T19:44:...|2018-01-31T20:35:...|
| 2|2018-01-31T20:36:...|2018-01-31T20:35:...|
+--------+--------------------+--------------------+
```
I want to remove the rows where `Date < Date_x` for each Index, as illustrated below:
```
+--------+--------------------+--------------------+
|Index | Date| Date_x|
+--------+--------------------+--------------------+
| 1|2018-01-31T21:19:...|2018-01-31T21:18:...|
| 1|2018-01-31T21:20:...|2018-01-31T21:18:...|
| 2|2018-01-31T20:36:...|2018-01-31T20:35:...|
+--------+--------------------+--------------------+
```
I tried adding a column `x_idx` by using `monotonically_increasing_id()` and getting `min(x_idx)` for each `Index` where `Date < Date_x`, so that I could subsequently drop the rows that don't satisfy the condition. But it doesn't seem to work for me. I probably don't fully understand how `agg()` works. Thank you for your help!
```
val test_df = df.withColumn("x_idx", monotonically_increasing_id())
val newIdx = test_df
.filter($"Date" > "Date_x")
.groupBy($"Index")
.agg(min($"x_idx"))
.toDF("n_Index", "min_x_idx")
newIdx.show
+-------+--------+
|n_Index|min_x_idx|
+-------+--------+
+-------+--------+
```<issue_comment>username_1: You forgot to add `$` in
```
.filter($"Date" > "Date_x")
```
so the correct `filter` is
```
.filter($"Date" > $"Date_x")
```
You can use `alias` instead of calling `toDF` as
```
val newIdx = test_df
.filter($"Date" > $"Date_x")
.groupBy($"Index".as("n_Index"))
.agg(min($"x_idx").as("min_x_idx"))
```
You should be getting output as
```
+-------+---------+
|n_Index|min_x_idx|
+-------+---------+
|1 |6 |
|2 |10 |
+-------+---------+
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: The filter condition might filtering all the records. Please check that print the dataframe after filtering records and make sure your filter works as you expected.
```
val newIdx = test_df
.filter($"Date" > $"Date_x")
.show
```
Upvotes: 0
|
2018/03/18
| 1,316 | 5,099 |
<issue_start>username_0: My issue is that Selenium says the **next arrow button** is enabled when it is actually disabled/grayed out. What I am trying to do is this:

1 click next arrow button

2 sleep for 5 seconds

3 check if disabled

4 click next arrow button

5 check if disabled

(loop: repeat steps 1-5; if the button is disabled, break the do-while loop)

My code that is not working is below:
```
PS_OBJ_CycleData.Nextbtn(driver).click();
Thread.sleep(5000);
WebElement element = driver.findElement(By.id("changeStartWeekGrid_next"));
if (element.isEnabled()) {
System.out.println("Good next arrow enabled");
} else {
System.out.println("next arrow disabled");
PS_OBJ_CycleData.Cancelbtn(driver).click();
break dowhileloop;
}
```
My console output is "Good next arrow enabled" instead of going to the else statement.
Button HTML is here
```
```
As you can see the button is actually disabled there another way to check is button is really disabled? Any help would be appreciated.
this is an additional picture of the inspected element
[](https://i.stack.imgur.com/piP1h.png)<issue_comment>username_1: [The documentation for isEnabled](https://seleniumhq.github.io/selenium/docs/api/java/org/openqa/selenium/remote/RemoteWebElement.html#isEnabled--).
Sadly, using the isEnabled method doesn't work in this case, as stated by the documentation:
>
> This will generally return true for everything but disabled **input** elements.
>
>
>
A proper alternative is using JavaScript to check for the attribute's existence, and its value. You can inject JavaScript through the [executeScript](https://seleniumhq.github.io/selenium/docs/api/java/org/openqa/selenium/remote/RemoteWebDriver.html#executeScript-java.lang.String-java.lang.Object...-) method of the webdriver classes. The first argument is the script, all following arguments are passed to the script, accessible as arguments[i], ...
For example:
```
Boolean disabled = (Boolean) driver.executeScript("return arguments[0].hasAttribute(\"disabled\");", element);
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: In this case, since I did not have an actual button, I needed to check its attribute to see whether it was disabled or not.
```
PS_OBJ_CycleData.Nextbtn(driver).click();
Thread.sleep(4000);
// check is arrow button is disabled
if (driver.findElement(By.id("changeStartWeekGrid_next")).getAttribute("disabled") != null) {
PS_OBJ_CycleData.Cancelbtn(driver).click();
break dowhileloop;
}
```
Upvotes: 0 <issue_comment>username_3: You can check it with this simple code:
```
Boolean isbutton;
isbutton = button1.isEnabled();
```
Upvotes: 0 <issue_comment>username_4: Make sure you have the correct element. I've wasted hours trying to figure out why an element was enabled when it shouldn't have been, when I was actually looking at the wrong one! Inspecting the element in the browser did not help, because it wasn't the same element that the java code was looking at. The following code turned out to be helpful:
```
System.out.println("Actual element=" + describeElement(yourElement));
public static String describeElement(WebElement element) {
String result = "";
if (element == null ) {
log.error("Could not describe null Element");
return "null";
}
// Look for common attributes, such as id, name, value, title, placeholder, type, href, target, role, class,
String id = element.getAttribute("id");
String name = element.getAttribute("name");
String value = element.getAttribute("value");
String title = element.getAttribute("title");
String placeholder = element.getAttribute("placeholder");
String type = element.getAttribute("type");
String href = element.getAttribute("href");
String target = element.getAttribute("target");
String role = element.getAttribute("role");
String thisClass = element.getAttribute("class");
result = "WebElement [tag:" + element.getTagName() + " text:'" + limit(element.getText()) + "' id:'" + id + "' " +
(StringUtils.isEmpty(name) ? "" : (" name:'" + name + "' ")) +
(StringUtils.isEmpty(value) ? "" : (" value:'" + value + "' ")) +
(StringUtils.isEmpty(title) ? "" : (" title:'" + title + "' ")) +
(StringUtils.isEmpty(placeholder) ? "" : (" placeholder:'" + placeholder + "' ")) +
(StringUtils.isEmpty(type) ? "" : (" type:'" + type + "' ")) +
(StringUtils.isEmpty(href) ? "" : (" href:'" + href + "' ")) +
(StringUtils.isEmpty(target) ? "" : (" target:'" + target + "' ")) +
(StringUtils.isEmpty(role) ? "" : (" role:'" + role + "' ")) +
(StringUtils.isEmpty(thisClass) ? "" : (" class:'" + thisClass + "' ")) +
" isDisplayed: " + element.isDisplayed() +
" isEnabled: " + element.isEnabled() +
" isSelected: " + element.isSelected() + "]";
return result;
}
```
Upvotes: 0
|
2018/03/18
| 741 | 2,723 |
<issue_start>username_0: I am getting the
>
> Sorry, the page you are looking for could not be found.
>
>
>
On Laravel 5.5. I am sure I'm missing something very small. But not sure what it is because I'm a laravel learner. Please see below:
ROUTE
```
Auth::routes();
Route::get('/curriculum-sections','CurriculumsectionsController@index')->name('curriculum-sections');
Route::resource('/curriculum-sections','CurriculumsectionsController');
```
CONTROLLER
```
public function show(Curriculumsection $curriculumsection)
{
//
$curriculum = Curriculum::findOrFail($curriculumsection->id);
return view('curriculum-sections.show', ['curriculum'=>$curriculum]);
}
```
and I also made sure that the page exists in the views folder. While troubleshooting I also did `php artisan route:list` and this is what I got[](https://i.stack.imgur.com/9jGPx.jpg)
Edit:
I am accessing the error from:
>
> <http://localhost:8000/curriculum-sections/1>
>
>
><issue_comment>username_1: I dont think that you need the first get route. You just define the Route::resource(...) and that's it. Laravel handles the different requests.
[See here -> Resource Controllers](https://laravel.com/docs/5.5/controllers#resource-controllers)
Upvotes: 0 <issue_comment>username_2: The problem is that your route defines `id` for model `Curriculumsection`. You're using model binding, which will automatically query for `Curriculumsection::findOrFail(route_id)` (in your case route\_id is 1). And then you're using the same `id` to query model `Curriculum` as well, with `->findOrFail(route_id)`.
So in order for this route to return anything other than 404, you have to have a record of `Curriculumsection` with id 1 and a record of `Curriculum` with id 1 in your database.
I'm not sure how your database is set up, or how these 2 models are related to each other, but definitely not by same `id` (otherwise, why not have all data in the same table).
Something like this would make more sense (binding the `Curriculum` model directly):
```
public function show(Curriculum $curriculum)
{
return view('curriculum-sections.show', ['curriculum'=>$curriculum]);
}
```
This would bind the `Curriculum` model to the route and automatically fetch the one with passed in `id`.
Or something like this for your use case, but it assumes that you have a working relationship called `curriculum()` on your `Curriculumsection` model:
```
public function show(Curriculumsection $curriculumsection)
{
$curriculum = $curriculumsection->curriculum;
return view('curriculum-sections.show', ['curriculum'=>$curriculum]);
}
```
Upvotes: 2 [selected_answer]
|
2018/03/18
| 367 | 1,304 |
<issue_start>username_0: I am trying to learn C++. Currently I ran into a tutorial which mentioned how to create a constant sized vector like this: `vector<int> v(10);` Now I'm wondering how to create a constant sized vector of constant sized vectors, something like: `vector<vector<int>(10)> v(10);` This code doesn't work, so I wanted to ask if there is a way to do something like this and, if there is, how?
```
std::array<std::array<int, 10>, 10> arr;
```
Upvotes: 2 <issue_comment>username_2: You could
```
vector<vector<int>> v(10, vector<int>(10));
```
i.e. construct the [`std::vector`](http://en.cppreference.com/w/cpp/container/vector/vector) with 10 copies of elements with value `std::vector<int>(10)`.
Note that for `std::vector` the size is not constant, it might be changed when element(s) are inserted or erased, 10 is just the initial size when `v` and its elements get initialized. On the other hand, the size of [`std::array`](http://en.cppreference.com/w/cpp/container/array) is constant, it's specified at compile-time and can't be changed at run-time.
Upvotes: 3 [selected_answer]
|
2018/03/18
| 754 | 2,831 |
<issue_start>username_0: How do I do rake style commands in my test file (Jest) with sequelize seeder files?
I'm trying to do the same thing as this, but with sequelize.
```
describe('routes : movies', () => {
beforeEach(() => {
return knex.migrate.rollback()
.then(() => { return knex.migrate.latest(); })
.then(() => { return knex.seed.run(); });
});
afterEach(() => {
return knex.migrate.rollback();
});
});
```<issue_comment>username_1: I think you **shouldn't make real** **DB** **requests** while testing your code. **Mock your DB request** and return the data set from your mock function if it's needed. Otherwise, it looks like you testing a library, in your case this lib is **knex**.
Read for more details regarding mocks <https://jestjs.io/docs/en/mock-functions>
Upvotes: -1 <issue_comment>username_2: I know I'm late to answer this but I just spent the past week trying to solve this issue. I have been able to successfully do this using Sequelize in conjunction with their sister project, Umzug. You will have to read the documentation for your specific issue but I can copy my test file so you can get an idea of how I did it. I'm happy to help someone if they still struggle with it after looking at the files.
```
// account.test.js
const models = require('../models/index.js');
const migrations = require("../index");
beforeAll(async () => {
await migrations.up().then(function() {
console.log("Migration and seeding completed")
});
});
afterAll( async () => {
await migrations.down().then(function() {
console.log("Migrations and down seeding completed");
})
const users = await models.User.findAll();
expect(users).toStrictEqual([]);
});
describe("Integration Test", () => {
it("Account integration test", async () => {
const data = { userId: 1210}
const users = await models.User.findAll();
console.log("All users:", JSON.stringify(users, null, 2));
expect(users[0].firstName).toBe('John');
expect(data).toHaveProperty('userId');
});
});
```
My index.js file
```
// index.js
const config = require('/config/config.json');
const { Sequelize } = require('sequelize');
const { Umzug, SequelizeStorage } = require('umzug');
const sequelize = new Sequelize(config);
const umzugMigration = new Umzug({
migrations: { glob: 'migrations/*.js' },
context: sequelize.getQueryInterface(),
storage: new SequelizeStorage({ sequelize }),
logger: console,
});
const umzugSeeding = new Umzug({
migrations: { glob: 'seeders/*.js' },
context: sequelize.getQueryInterface(),
storage: new SequelizeStorage({ sequelize }),
logger: console,
});
module.exports.up = () => umzugMigration.up().then(() => umzugSeeding.up());
module.exports.down = () => umzugSeeding.down();
```
Upvotes: 1
|
2018/03/18
| 673 | 2,564 |
<issue_start>username_0: I've created an account in Firebase using phone authentication. However, from the documentation, it mention that:
>
> If you use phone number based sign-in in your app, you should offer it
> alongside more secure sign-in methods, and inform users of the
> security tradeoffs of using phone number sign-in
>
>
>
I couldn't find a field to inject the password into the users database.

Should I enable the password/email sign in method? Is there any documentation to refer to?

I added email and password using:
```
createUserWithEmail:email:password:completion:
```
2 accounts are created:

I should rephrase my question to:
**If the user logout, when they sign in again should they use the phone number, or email and password?**<issue_comment>username_1: This is what it says in the documentation:
>
> Authentication using only a phone number, while convenient, is less secure than the other available methods, **because possession of a phone number can be easily transferred between users**. Also, on devices with multiple user profiles, any user that can receive SMS messages can sign in to an account using the device's phone number.
>
>
> If you use phone number based sign-in in your app, you should offer it alongside more secure sign-in methods, and inform users of the security tradeoffs of using phone number sign-in.
>
>
>
So all it means is that it is better to use another method with it, like email/password method.
When you enable that, then the user can create an account using his email, and you do not need the password, only the user id after he creates an account.
more info here:
<https://firebase.google.com/docs/auth/ios/password-auth>
Upvotes: 3 [selected_answer]<issue_comment>username_2: Based on @username_1's answer:

I updated the code to link the phone-authenticated user with the email/password authentication method.
```
FIRAuthCredential *credential =
[FIREmailAuthProvider credentialWithEmail:userEmail
password:<PASSWORD>];
[[FIRAuth auth].currentUser linkWithCredential:credential
                                    completion:^(FIRUser *_Nullable user, NSError *_Nullable error) {
// ...
FIRUser *tmpUser = user;
}];
```
You should see this in the console (with only one row with 2 authentication types instead of 2 rows):
[](https://i.stack.imgur.com/BtaS4.png)
Upvotes: 0
|
2018/03/18
| 277 | 911 |
<issue_start>username_0: I have a custom dropdown with some text options, and I need to set a fixed height. Seems simple enough: I added `max-height` and `overflow-y: scroll`. It works great! But in Chrome's device mode, at big resolutions, it blurs the text.
I made a pen that shows this. <https://codepen.io/a40637francisco/pen/OvbNyB>
It seems these 2 properties don't go well with chrome.
```css
max-height: 100px;
overflow-y: scroll;
```
I've tried several *fixes* using translate, perspective, smoothing, but no luck. Any ideas?<issue_comment>username_1: I guess it has to do with the mobile screen resizing. Setting the height to 100% fixed it for me. Hope it helps!
Upvotes: 3 <issue_comment>username_2: I found a solution that also sets `height`:
```css
height: 100%;
max-height: 100px;
overflow-y: scroll;
```
Upvotes: 0 <issue_comment>username_3: In my case, removing `border-radius` is what helped.
Upvotes: 0
|
2018/03/18
| 149 | 556 |
<issue_start>username_0: I looked into go list but couldn't find a flag that does this. <https://golang.org/pkg/math/> has a Round function but it's not found in my local install.<issue_comment>username_1: golang.org always documents the latest released version of Go. See <https://golang.org/doc/devel/release.html> for the release notes. [math.Round was introduced in 1.10](https://golang.org/doc/go1.10#math).
Upvotes: 3 [selected_answer]<issue_comment>username_2: You should update the Go installation for your platform to the latest available release.
Upvotes: -1
|
2018/03/18
| 878 | 3,763 |
<issue_start>username_0: I have been using Jmeter for performance testing my web application. I have recorded the jmeter script by excluding js,css and other static content files.
While running the script, JMeter doesn't execute JavaScript files, so AJAX XHR requests are not sent. To overcome this I have recorded the script with js, css and other static content, and it recorded all the AJAX XHR requests too. But the performance results seem to be different from the browser loading time. Also, I need to use the cache during my performance testing.
Below is what my test plan looks like:
1. Included Retrieve all embedded resources in HTTP Request manager.
2. Concurrent pool size is 6
3. I have added HTTP cookie manager and Cache manager.
4. I have added a loop controller (This is for caching, jmeter will cache the files on first iteration and it will use the cached files
after that)
The problem I am facing is that the time taken for the REST calls is double the time shown in the browser console for a single user. I have tried all other combinations, but I always get a higher time than the browser console.

I have tried to use the Selenium WebDriver plugin to simulate the browser behavior, but it doesn't seem to use the cache. (<https://www.blazemeter.com/blog/how-load-test-ajaxxhr-enabled-sites-jmeter>)

Is there any other way to solve this problem? **I want to take the metrics with cache, so kindly suggest any solution that includes caching.** Or is there any other tool similar to JMeter that could solve this issue? My goal is to measure web page load time with cache for 'n' users.

PS: I am willing to write scripts in JMeter, as long as the scripts do not overload JMeter itself.
Thanks in advance.<issue_comment>username_1: JMeter and caching are unrelated - everything that happens after the data goes over the wire is out of scope by design. You should only ever simulate requests that you expect NOT to be cached. So this is the feasible part: drop all requests that you expect the browser to cache from the JMeter script (or move them outside the loop). On the load time of XHR: the browser will almost certainly use HTTP keep-alive. The result is that all requests except the very first skip the setup and teardown phase of TCP sockets and are much faster - especially when the request itself is small and quick. You can simulate this in JMeter by checking the KeepAlive option *AND* selecting http commons as the implementation. You can read up on this in the docs here: <http://jmeter.apache.org/usermanual/component_reference.html#HTTP_Request>
Upvotes: 1 <issue_comment>username_2: 1. You should not be recording calls to embedded resources (images, scripts, styles, fonts, etc.) as if you record them - they will be executed **sequentially** while real browsers do this in parallel. So remove recorded requests for the embedded resources and "tell" JMeter to download them (and do it in parallel) using [HTTP Request Defaults](http://jmeter.apache.org/usermanual/component_reference.html#HTTP_Request_Defaults)
2. You should be recording AJAX requests, however real browsers execute them in parallel while JMeter runs them sequentially. In order to make JMeter's behavior closer to real browser you need to put these AJAX calls under the [Parallel Controller](https://www.blazemeter.com/blog/how-to-use-the-parallel-controller-in-jmeter)
You can install the [Parallel Controller](https://github.com/Blazemeter/jmeter-bzm-plugins/blob/master/parallel/Parallel.md) extension using [JMeter Plugins Manager](https://jmeter-plugins.org/wiki/PluginsManager/)
[](https://i.stack.imgur.com/KnoSY.png)
Upvotes: 3 [selected_answer]
|
2018/03/18
| 930 | 3,414 |
<issue_start>username_0: I'm writing a simple program which receives integer inputs from a Scanner object, determines whether it's a palindrome or not, and returns the boolean value.
For most numbers, it works well. However, at this code snippet:
```
private static void programRunner() {
    System.out.print("Insert your number:");
    Scanner in = new Scanner(System.in);
    if (in.hasNextInt()) {
        int testNumber = in.nextInt();
        boolean result = palindromeTester(testNumber);
        System.out.println(result);
        programRunner();
    } else {
        System.exit(0);
    }
}
```
I added the "System.exit(0)" expression to make users easily terminate the program by intentionally typing any non-integer value. The problem is that when "considerably large" integers, such as "1234567654321," are provided, the code launches System.exit(0) which means it's not recognized as an integer?
I believe it's the problem lies in the "default radii" of the hasNextInt method, which probably limits the size of integer values it recognizes. (The program runs fine up to 9-digit integers) But I'm not sure. Or is there something wrong with the recursion?<issue_comment>username_1: Because an `int` in Java is **32 bit** and can only hold `2^31 - 1` (2147483647) as maximum value (see also [Integer.MAX\_VALUE](https://docs.oracle.com/javase/7/docs/api/java/lang/Integer.html#MAX_VALUE)).
Anything bigger than that is not `int`, but `long` (except if it's even bigger than `Long.MAX_VALUE`, in which case you need to get the value as `BigInteger`.)
See [Integer.MAX\_VALUE](https://docs.oracle.com/javase/7/docs/api/java/lang/Integer.html#MAX_VALUE), [Long.MAX\_VALUE](https://docs.oracle.com/javase/7/docs/api/java/lang/Long.html#MAX_VALUE),
[Scanner.nextInt()](https://docs.oracle.com/javase/7/docs/api/java/util/Scanner.html#nextInt()), [Scanner.nextLong()](https://docs.oracle.com/javase/7/docs/api/java/util/Scanner.html#nextLong()),
[Scanner.nextBigInteger()](https://docs.oracle.com/javase/7/docs/api/java/util/Scanner.html#nextBigInteger()) and [BigInteger](https://docs.oracle.com/javase/7/docs/api/java/math/BigInteger.html).
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can use `nextLong()` if you have large `long` integers to read, like so:
```
private static void programRunner() {
    System.out.print("Insert your number:");
    Scanner in = new Scanner(System.in);
    if (in.hasNextLong()) {
        long testNumber = in.nextLong();
        boolean result = palindromeTester(testNumber);
        System.out.println(result);
        programRunner();
    } else {
        System.exit(0);
    }
}
```
Upvotes: 0 <issue_comment>username_3: It seems that you've exceeded the range limit for the `int` type. the `long` type seems like what you're looking for.
So, you can either use the `hasNextLong()` and `nextLong()` methods of the `Scanner` class or as @Hovercraft Full Of Eels has suggested in the comments since you're not using the data in a numerical way, it may be better to receive the data as a `String` instead.
Lastly, but not least I find it inefficient to use recursion here as each recursive call creates a new stack frame along with that for each recursive invocation you're newing up a `Scanner` instance. it's better to use a `while` loop instead.
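A loop-based version of that idea might look like the sketch below (hypothetical names, not the asker's exact program; it reads from a fixed string here instead of `System.in`, and checks the palindrome by comparing the text with its reverse):

```java
import java.util.Scanner;

public class PalindromeLoop {

    // Treats the number as text and compares it with its reverse.
    public static boolean isPalindrome(long n) {
        String s = Long.toString(n);
        return s.equals(new StringBuilder(s).reverse().toString());
    }

    public static void main(String[] args) {
        // One Scanner, one loop: stops at the first token that is not a long.
        Scanner in = new Scanner("121 1234567654321 12345 quit");
        while (in.hasNextLong()) {
            long n = in.nextLong();
            System.out.println(n + " -> " + isPalindrome(n));
        }
    }
}
```

This avoids both the growing call stack and the repeated `Scanner` allocations, and large values such as `1234567654321` fit comfortably in a `long`.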
Upvotes: 0
|
2018/03/18
| 466 | 1,550 |
<issue_start>username_0: I have an assignment to count all the changes from all the files from a git open-source project. I know that if I use:
```
git log --pretty=oneline | wc -l
```
I will get the number of changes of a file (I use git bash on Windows 10).
My idea is to use
```
find .
```
and redirect the output to the git command. How can I do the redirection? I tried:
```
$ find . > git log --pretty=online | wc -l
0
find: unknown predicate `--pretty=online'
```
and
```
$ find . | git log --pretty=online | wc -l
fatal: invalid --pretty format: online
0
```<issue_comment>username_1: You'll need to loop over the results of `find`.
```
find . -type f | grep -v '^\./\.git' |
while read -r f; do
    count=$(git log --oneline -- "${f}" | wc -l)
    echo "${f} - ${count}"
done | grep -v ' 0$'
```
Your `find` is okay, but I'd restrict it to just files (`git` doesn't track directories explicitly) and remove the `.git` folder (we don't care about those files). Pipe that into a loop (I'm using a `while`), and then your `git log` command works just fine. Lastly, I'm going to strip anything with a count of 0, since I may have files that are part of `.gitignore` I don't want to show up (e.g., things in `__pycache__`).
Upvotes: 0 <issue_comment>username_2: You can do *much* better than that,
```
git log --pretty='' --name-only | sort | uniq -c
```
That's "show only the names of files changed in each commit, no other metadata, sort that list so uniq can easily count the occurrences of each'
Upvotes: 3 [selected_answer]
|
2018/03/18
| 677 | 2,564 |
<issue_start>username_0: I have this in my route.ts
```
path: 'profile/:id',
component: ProfileComponent,
canActivate: [SiteRouteGuard],
children: [
{
path: 'albums',
component: AlbumsComponent,
canActivate: [SiteRouteGuard]
},
...
]
```
Now in my profile.ts I got this
```
localhost:4200/profile/45666
```
and when I route to album.ts, which is a child of my profile.ts
```
localhost:4200/profile/45666/albums
```
Now that I'm in my album.ts, how can I get the id 45666?
I tried to use ActivatedRoute:
```
console.log(this._route.snapshot.params.id) >> undefined
```<issue_comment>username_1: You can use the `ActivatedRoute` class from `@angular/router`:
```
import { ActivatedRoute, Router } from "@angular/router"
public activatedRoute: ActivatedRoute
this.activatedRoute.parent.params // returns an Observable
```
if you want the static value, then ask for
```
this.activatedRoute.parent.snapshot.params // returns static values
```
By using the `parent` attribute, you can access all the parameters and queries from the parent component.
Upvotes: 5 [selected_answer]<issue_comment>username_2: If you are using ActivatedRoute, you can get the id in the ngOnInit() method, like:
```
this.activatedRoute.params.subscribe((params: Params) => {
var id = params['id'];
});
```
Upvotes: 1 <issue_comment>username_3: You need to import ActivatedRoute from @angular/router:
```
import { ActivatedRoute } from "@angular/router";
```
Then in your constructor of AlbumsComponent you can grab that :id parameter
like this:
```
let id = this.activatedRoute.parent.snapshot.params["id"]
```
Hope it helps!
Upvotes: 2 <issue_comment>username_4: Since angular `5.2.x` there is another way to access parent params.
All you have to do is set config option [paramsInheritanceStrategy](https://angular.io/api/router/ExtraOptions#paramsInheritanceStrategy) to `'always'` like this:
```
RouterModule.forRoot(routes, {
paramsInheritanceStrategy: 'always'
})
```
With this config, you will have access to all ancestor routes params via prototypal inheritance, so you can get it directly from activated route:
`this.activatedRoute.snapshot.params`
Upvotes: 4 <issue_comment>username_5: Do it like this
```
import { ActivatedRoute } from "@angular/router";
let idparam = this.activatedRoute.parent.snapshot.params["idparam"]
```
Upvotes: 1
|
2018/03/18
| 733 | 3,001 |
<issue_start>username_0: I've created a new WinForms project in Visual Studio 2017.
Then I've added a button and textbox to Form1 ([screenshot](https://i.stack.imgur.com/QjHal.png)).
Code:
```
using System;
using System.Net;
using System.Windows.Forms;

namespace TestWinForms
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private delegate void FormDelegate();

        private void button1_Click(object sender, EventArgs e)
        {
            UseWaitCursor = true;
            button1.Enabled = false;
            BeginInvoke(new FormDelegate(delegate
            {
                using (WebClient web = new WebClient())
                {
                    web.Encoding = System.Text.Encoding.UTF8;
                    textBox1.Text = web.DownloadString("https://stackoverflow.com/");
                }
                UseWaitCursor = false;
                button1.Enabled = true;
            }));
        }
    }
}
```
When I click button1, the window cursor doesn't change to WaitCursor, and when I hover the cursor over the ControlBox buttons they aren't "glowing". In short, `BeginInvoke()` blocks the main thread for a moment. Why is this happening and how can I avoid it?
You should probably tackle this in another way, by using `DownloadStringAsync`:
```
private WebClient _web;

private void button1_Click(object sender, EventArgs e)
{
    UseWaitCursor = true;
    button1.Enabled = false;
    _web = new WebClient();
    _web.Encoding = System.Text.Encoding.UTF8;
    _web.DownloadStringCompleted += DownloadCompleted;
    _web.DownloadStringAsync(new Uri("https://stackoverflow.com/"));
}

private void DownloadCompleted(object sender, DownloadStringCompletedEventArgs e)
{
    textBox1.Text = e.Result;
    UseWaitCursor = false;
    button1.Enabled = true;
    _web.Dispose();
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: I second Hans comment : the BeginInvoke only defer execution later.
What you need is either a BackgroundWorker or (better) using the [async/await pattern](https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/async/):
```
private async void button1_Click(object sender, EventArgs e)
{
    UseWaitCursor = true;
    button1.Enabled = false;
    using (WebClient web = new WebClient())
    {
        web.Encoding = System.Text.Encoding.UTF8;
        textBox1.Text = await web.DownloadStringTaskAsync("https://stackoverflow.com/");
    }
    UseWaitCursor = false;
    button1.Enabled = true;
}
```
`DownloadStringTaskAsync` runs asynchronously because it is awaitable. While the download is in progress, the UI thread continues to process other events, and it resumes execution after the `await` statement once `DownloadStringTaskAsync` finishes.
Upvotes: 1
|
2018/03/18
| 363 | 889 |
<issue_start>username_0: Consider a vector `x`:
```
set.seed(123)
x <- rnorm(100)
```
We can get the order of `x` and the ordered `x` with:
```
y <- order(x)
z <- x[y]
```
If I only kown `y` and `z`, how can I get original `x`?<issue_comment>username_1: EDIT:
The old solution (to the original question) is:
```
set.seed(123)
x <- rnorm(100)
y <- order(x)
z <- x[order(y)]
x2 <- z[y]
identical(x2, x)
```
Since the question now states the `z` as:
```
y <- order(x)
z <- x[y]
```
The answer is obviously (as in @username_2's answer):
```
x2<-numeric(length(x))
x2[y]<-z
identical(x2, x)
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Your initial way of doing things is wrong as @42 said. The correct way of sorting x is:
```
y<-order(x)
z<-x[y]
```
Then you can get back the original x as:
```
x2<-numeric(length(x))
x2[y]<-z
identical(x2, x)
```
Upvotes: 1
|
2018/03/18
| 856 | 3,159 |
<issue_start>username_0: Suppose I have the following function, `doWork`, that starts some work in a goroutine and returns a `Result` to check for completion and error:
```
func doWork() *Result {
    r := Result{doneCh: make(chan struct{})}
    go func() {
        var err error
        defer func() {
            r.err = err
            close(r.doneCh)
        }()
        // do some work
    }()
    return &r
}
```
where `Result` is the following struct:
```
type Result struct {
    doneCh chan struct{}
    err    error
}

// Done returns a closed chan when the work is done.
func (r *Result) Done() <-chan struct{} {
    return r.doneCh
}

// Err returns a non-nil err if the work failed.
// Don't call Err until Done returns a closed chan.
func (r *Result) Err() error {
    return r.err
}
```
is this code thread safe if I set `err` before closing `doneCh`:
```
defer func() {
    r.err = err
    close(r.doneCh)
}()
```
or is the compiler free to order the `r.err = err` and `close(r.doneCh)` instructions as it likes, in which case I'd need a mutex to prevent concurrent read/writes on error.<issue_comment>username_1: The compiler may not reorder the assignment and close statement, so you do not need a mutex if callers are well-behaved and do as instructed by your docs.
This is explained in [The Go Memory Model, Channel Communication](https://golang.org/ref/mem#tmp_7).
Upvotes: 2 [selected_answer]<issue_comment>username_2: It is thread-safe only if your comments are obeyed and `Err()` is never called until a read from `Done()` returns.
You could simply make `Err()` blocking though by re-implementing it as:
```
func (r *Result) Err() error {
    <-r.doneCh
    return r.err
}
```
Which would guarantee that `Err()` only returns after done is complete. Given that err will be nil until the work errors, you have no way of telling if `Err()` is returning successfully because work was finished or because it hasn't completed or errored yet unless you block on `Done()` first, in which case why not just make `Err()` blocking?
Upvotes: 2 <issue_comment>username_3: Have you tried using `chan error` and testing if the channel is opened or closed on reception?
```
package main

import (
    "errors"
    "fmt"
)

func delegate(work func(ch chan error)) {
    ch := make(chan error)
    go work(ch)
    for {
        err, opened := <-ch
        if !opened {
            break
        }
        // Handle errors
        fmt.Println(err)
    }
}

func main() {
    // Example: error
    delegate(func(ch chan error) {
        defer close(ch)
        // Do some work
        fmt.Println("Something went wrong.")
        ch <- errors.New("Eyyyyy")
    })

    // Example: success
    delegate(func(ch chan error) {
        defer close(ch)
        // Do some work
        fmt.Println("Everything went fine.")
    })

    // Example: error
    delegate(func(ch chan error) {
        defer close(ch)
        // Do some work
        fmt.Println("Something went wrong more than once.")
        ch <- errors.New("Eyyyyy 1")
        ch <- errors.New("Eyyyyy 2")
        ch <- errors.New("Eyyyyy 3")
    })
}
```
Upvotes: 0
|
2018/03/18
| 336 | 963 |
<issue_start>username_0: I have a question regarding a query I am writing:
I have two data sets, each a single column, whose values I want to combine, for instance:
```
Set 1: Set 2:
1 A
2 B
3 C
```
I want to create a query with the result:
```
resultset:
1 A
1 B
1 C
2 A
2 B
2 C
3 A
3 B
3 C
```
Is this possible using a JOIN or UNION?
So I get all possible combinations between the different rows?
Thank you for helping out!<issue_comment>username_1: You are looking for `cross join`:
```
select t1.col1, t2.col1
from table1 t1 cross join
table2 t2
order by t1.col1, t2.col1;
```
Upvotes: 2 <issue_comment>username_2: What you want to do is a [cartesian join](https://www.tutorialspoint.com/sql/sql-cartesian-joins.htm)
```
SELECT
S1.col1,
S2.col2
FROM
S1,S2
```
Please see the example in the [fiddle](https://www.db-fiddle.com/f/cun4CGEcFrJUcdWdH2J7LK/0)
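If it helps to see what the cartesian join produces outside SQL, here is a small illustrative sketch in Java (a hypothetical helper using the sample sets from the question; it is not part of either answer's SQL):

```java
import java.util.ArrayList;
import java.util.List;

public class CrossJoinDemo {

    // Pairs every element of the first list with every element of the
    // second one - the same row set a cartesian (cross) join produces.
    public static List<String> crossJoin(List<Integer> set1, List<String> set2) {
        List<String> rows = new ArrayList<>();
        for (Integer n : set1) {
            for (String s : set2) {
                rows.add(n + " " + s);
            }
        }
        return rows;
    }

    public static void main(String[] args) {
        crossJoin(List.of(1, 2, 3), List.of("A", "B", "C"))
                .forEach(System.out::println);  // 9 rows: 1 A ... 3 C
    }
}
```

With 3 rows on each side you get 3 x 3 = 9 combinations, exactly the result set asked for in the question.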
Upvotes: 2
|
2018/03/18
| 958 | 3,375 |
<issue_start>username_0: I am trying to implement the datepicker using the django bootstrap datepicker widget, but the datepicker is not showing on the webpage ("welcome.html").
I have passed the form to the views and called the form in the webpage.
Any help will be appreciated.
**forms.py**
```
class DateForm(forms.Form):
    todo = forms.CharField(
        widget=forms.TextInput(attrs={"class": "form-control"}))
    date_1 = forms.DateField(widget=DatePicker(
        options={"format": "mm/dd/yyyy",
                 "autoclose": True}))

    def cleandate(self):
        c_date = self.cleaned_data['date_1']
        if c_date < datetime.date.today():
            raise ValidationError(_('date entered has passed'))
        elif c_date > datetime.date.today():
            return c_date

    def itamdate(self):
        date_check = calendar.setfirstweekday(calendar.SUNDAY)
        if c_date:
            if date_1 == date_check:
                pass
            elif date_1 != date_check:
                raise ValidationError('The Market will not hold today')
        for days in list(range(0, 32)):
            for months in list(range(0, 13)):
                for years in list(range(2005, 2108)):
                    continue
```
**views.py**
```
def imarket(request):
    # creates a form instance and populates it using the date entered by the user
    form = DateForm(request.POST or None)
    if request.method == 'POST':
        # checks validity of the form
        if form.is_valid():
            itamdate = form.cleaned_data['date_1']
            return HttpResponseRedirect(reverse('welcome.html'))
    return render(request, 'welcome.html', {'form': form})
```
**welcome.html**
```
{% extends 'base.html' %}
{% load bootstrap %}
{% block content %}
welcome
{% csrf_token %}
{{form|bootstrap_horizontal}}
{% endblock %}
```<issue_comment>username_1: The bootstrap datepicker wasn't working, so I added the jquery datepicker directly to my forms.py using this [answer](https://stackoverflow.com/a/24652214/8796032).
my updated **forms.py**
```
class DateForm(forms.Form):
    todo = forms.CharField(
        widget=forms.TextInput(attrs={"class": "form-control"}))
    date_1 = forms.DateField(widget=forms.DateInput(attrs={'class': 'datepicker'}))
```
my updated **welcome.html**
```
{% extends 'base.html' %}
{% load bootstrap %}
{% block content %}
welcome
{% csrf_token %}
{{form|bootstrap}}
$(document).ready(function() {
$('.datepicker').datepicker();
});
{% endblock %}
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: The datepicker should be initialized in the HTML page.
Upvotes: 0 <issue_comment>username_3: If you are using Bootstrap, download the datepicker from [Datetimepicker for Bootstrap 4](https://github.com/pingcheng/bootstrap4-datetimepicker)
then
in the head section include
1. font awesome css
2. bootstrap4 css
3. bootstrap-datetimepicker.min.css
in the body section include
1. moment.min.js
2. bootstrap.min.js
3. bootstrap-datetimepicker.min.js
define your form
```
from django import forms

class DateRangeForm(forms.Form):
    start_date = forms.DateTimeField(required=True)
    end_date = forms.DateTimeField(required=True)
```
in the template render the form
```
{% csrf_token %}
{{ dt_range_form }}
```
in the script section add
```
$("#id_start_date").datetimepicker();
```
And here you have your datetimepicker! For me the time icon is still not showing up; I need to debug it.
Upvotes: 0
|
2018/03/18
| 754 | 2,848 |
<issue_start>username_0: I am getting below sample json data as input,
```
var JSONDATA =
{
"MainNode": {
"_attributes": {
"class": "ABC",
"projectclass": "MyProject",
"prjname": "PrjName",
"enabled":"true"
},
"PrjProp": {
"_attributes": {
"name": "MyProject.save"
},
"_text": true
}
}
}
```
Using jQuery or JavaScript, I want to get the `"projectclass"` value in the first `"_attributes"`. There could be multiple `"_attributes"` in the JSON object, but the requirement is to get the (fixed) `"projectclass"` from the first (fixed) `"_attributes"` only.
This can be achieved like:
`console.log(JSONDATA.MainNode._attributes.projectclass);`
but `"MainNode"` is not fixed; this can be `"OtherNode"` too. So how do I handle this variable node name? I tried `console.log(Object.keys($scope.testplan)[0]);`, which shows the main node name, but how do I use that as a variable in place of `MainNode` above?
Please suggest.
Thanks<issue_comment>username_1: You have to access as `JSONDATA.MyProject._attributes.projectclass`
Using `[0]` is accessing the first element of an array. This is an object, so that is the reason why you are not able to access `JSONDATA[0][0].projectclass`
```js
var JSONDATA = {"MyProject":{"_attributes":{"class":"ABC","projectclass":"MyProject","prjname":"PrjName","enabled":"true"},"PrjProp":{"_attributes":{"name":"MyProject.save"},"_text":true}}};
console.log( JSONDATA.MyProject._attributes.projectclass );
```
And there is going to be only 1 `_attributes` under `MyProject`.
---
If you have multiple *projects*, You can loop by:
```js
var JSONDATA = {
"MyProject":{"_attributes":{"class":"ABC","projectclass":"MyProject","prjname":"PrjName","enabled":"true"},"PrjProp":{"_attributes":{"name":"MyProject.save"},"_text":true}},
"MyOtherProject":{"_attributes":{"class":"ABC","projectclass":"MyOtherProject","prjname":"PrjName","enabled":"true"},"PrjProp":{"_attributes":{"name":"MyProject.save"},"_text":true}}
};
for ( var key in JSONDATA ) {
console.log( JSONDATA[key]._attributes.projectclass );
}
```
Upvotes: 2 <issue_comment>username_2: you cant use `JSONDATA[0][0]` to acces a JSON object
for example `data` is your JSON object to access `projectclass` simply
```
data={
"MyProject": {
"_attributes": {
"class": "ABC",
"projectclass": "MyProject",
"prjname": "PrjName",
"enabled": "true"
},
"PrjProp": {
"_attributes": {
"name": "MyProject.save"
},
"_text": true
}
}
}
console.log(data.MyProject._attributes.projectclass);
```
Upvotes: 0
|
2018/03/18
| 1,210 | 4,216 |
<issue_start>username_0: in my table I have an attribute called 'messages' with this exact data type:
```
ARRAY<STRUCT<created_time DATETIME, sender STRUCT<id STRING, name STRING, email STRING>, id STRING, message STRING>>
```
and I have defined a UDF named my\_func()
Because UDF functions in BigQuery don't support the type DATETIME, I need to cast the attribute created\_time.
So I tried this:
```
safe_cast ( messages as ARRAY<STRUCT<created_time DATETIME, sender STRUCT<id STRING, name STRING, email STRING>,
id STRING, message STRING>>) as messages_casted
```
and I get this error
```
Casting between arrays with incompatible element types is not
supported: Invalid cast from...
```
Is there an error in the way I cast an array of structs?
Is there some way to use a UDF with this data structure, or is the only way to flatten the array and do the cast?
My goal is to take the array in the JS execution environment in order to make the aggregation with JS code.<issue_comment>username_1: When working with JavaScript UDFs, you don't need to specify complex input data types explicitly. Instead, you can use the [`TO_JSON_STRING` function](https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#to_json_string). In your case, you can have the UDF take `messages` as a `STRING`, then parse it inside the UDF. You would call `my_func(TO_JSON_STRING(messages))`, for example.
Here is an example [from the documentation](https://cloud.google.com/bigquery/docs/reference/standard-sql/user-defined-functions):
```
CREATE TEMP FUNCTION SumFieldsNamedFoo(json_row STRING)
RETURNS FLOAT64
LANGUAGE js AS """
function SumFoo(obj) {
var sum = 0;
for (var field in obj) {
if (obj.hasOwnProperty(field) && obj[field] != null) {
if (typeof obj[field] == "object") {
sum += SumFoo(obj[field]);
} else if (field == "foo") {
sum += obj[field];
}
}
}
return sum;
}
var row = JSON.parse(json_row);
return SumFoo(row);
""";
WITH Input AS (
SELECT STRUCT(1 AS foo, 2 AS bar, STRUCT('foo' AS x, 3.14 AS foo) AS baz) AS s, 10 AS foo UNION ALL
SELECT NULL, 4 AS foo UNION ALL
SELECT STRUCT(NULL, 2 AS bar, STRUCT('fizz' AS x, 1.59 AS foo) AS baz) AS s, NULL AS foo
)
SELECT
TO_JSON_STRING(t) AS json_row,
SumFieldsNamedFoo(TO_JSON_STRING(t)) AS foo_sum
FROM Input AS t;
+---------------------------------------------------------------------+---------+
| json_row | foo_sum |
+---------------------------------------------------------------------+---------+
| {"s":{"foo":1,"bar":2,"baz":{"x":"foo","foo":3.14}},"foo":10} | 14.14 |
| {"s":null,"foo":4} | 4 |
| {"s":{"foo":null,"bar":2,"baz":{"x":"fizz","foo":1.59}},"foo":null} | 1.59 |
+---------------------------------------------------------------------+---------+
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: >
> Because UDF function in Big Query don't support the type DATETIME I need to cast the attribute created\_time.
>
>
>
Below is for BigQuery Standard SQL and is a very simple way of Casting specific element of `ARRAY` leaving everything else as is.
So in below example - it `CAST`s created\_time from `DATETIME` to `STRING` (you can use any compatible type you need in your case though)
```
#standardSQL
SELECT messages,
ARRAY(SELECT AS STRUCT * REPLACE (SAFE_CAST(created_time AS STRING) AS created_time)
FROM UNNEST(messages) message
) casted_messages
FROM `project.dataset.table`
```
If you run it against your data - you will see original and casted messages - all elements should be same (value/type) with exception of (as expected) created\_time which will be of casted type (STRING in this particular case) or NULL if not compatible
You can test / play with above using dummy data as below
```
#standardSQL
WITH `project.dataset.table` AS (
SELECT [STRUCT<created_time DATETIME, sender STRUCT<id STRING, name STRING, email STRING>,
id STRING,
message STRING>
(DATETIME '2018-01-01 13:00:00', ('1', 'mike', 'zzz@ccc'), 'a', 'abc'),
(DATETIME '2018-01-02 14:00:00', ('2', 'john', 'yyy@bbb'), 'b', 'xyz')
] messages
)
SELECT messages,
ARRAY(SELECT AS STRUCT * REPLACE (SAFE_CAST(created_time AS STRING) AS created_time)
FROM UNNEST(messages) message
) casted_messages
FROM `project.dataset.table`
```
Upvotes: 1
|
2018/03/18
| 1,276 | 4,290 |
<issue_start>username_0: I'd want a query to fetch all columns of a tuple according to a column's lowest value in the table:
```
SELECT *, MIN(Opened) FROM Account
```
And for some reason this throws an error:
>
> ORA-00923: FROM keyword not found where expected
>
>
>
This is basic stuff in my opinion, coming from SQLite, but Oracle is new to me.
I created a table like:
```
CREATE TABLE Account (
Act_id INTEGER,
Opened DATE,
Balance NUMBER(10,2),
CONSTRAINT act_pk PRIMARY KEY(act_id)
)
```
If I enter any other column name (e.g. `SELECT balance, MIN(opened)`), it gives another error:
>
> ORA-00937: not a single-group group function
>
>
>
According to [this](http://www.learncertification.com/study-material/select-all-columns-in-oracle-sql) tutorial at least the `*` notation is in Oracle. And the `MIN` works by itself. Am I missing something?
|
2018/03/18
| 953 | 3,589 |
<issue_start>username_0: I have a HashMap of Products. Each Product has a Price. I know how to find the Product with the max Price. But using Java 8 Streams is really puzzling me. I tried this but no luck:
```
public Product getMostExpensiveProduct(HashMap<String, Product> items) {
    Product maxPriceProduct = items.entrySet()
            .stream()
            .reduce((Product a, Product b) ->
                a.getPrice() < b.getPrice() ? b : a);
    return maxPriceProduct;
}
```<issue_comment>username_1: The first problem is that since you want to find the max `Product` by price, it's better to use `items.values()` as the source for the stream, as then you'll have a `Stream<Product>` instead of a `Stream<Map.Entry<String, Product>>`.
Second, the `reduce` operation doesn't have the correct types. So, to make your current code work you'll need to do something along the lines of:
```
Optional<Map.Entry<String, Product>> result =
        items.entrySet()
             .stream()
             .reduce((Map.Entry<String, Product> a, Map.Entry<String, Product> b) ->
                 a.getValue().getPrice() < b.getValue().getPrice() ? b : a);
return result.isPresent() ? result.get().getValue() : null;
```
Third, this overload of the `reduce` operation returns an `Optional` so the receiver type of the result set must be an `Optional` as shown above.
Above, we're returning `null` in the case where there is no value present in the Optional.
A much better solution would be to make the method return a type `Optional<Product>`.
This is a better alternative to returning `null` as it documents as well as ensures that the user of this method unwraps the return value safely.
nullity can be dangerous at times and leveraging Optional's where appropriate can take you a long way.
With all that taking into account your code becomes:
```
// example without returning an `Optional`
public Product getMostExpensiveProduct(HashMap<String, Product> items) {
    Optional<Product> maxPriceProduct =
            items.values()
                 .stream()
                 .reduce((Product a, Product b) ->
                     a.getPrice() < b.getPrice() ? b : a);
    return maxPriceProduct.orElse(null);
}
```
// example returning an `Optional`
```
public Optional<Product> getMostExpensiveProduct(HashMap<String, Product> items) {
    Optional<Product> maxPriceProduct =
            items.values()
                 .stream()
                 .reduce((Product a, Product b) ->
                     a.getPrice() < b.getPrice() ? b : a);
    return maxPriceProduct;
}
```
Anyhow, the `max` method is more appropriate for this task instead of the `reduce`, so it can all be improved to:
```
Optional<Product> maxPriceProduct =
        items.values()
             .stream()
             .max(Comparator.comparingInt(Product::getPrice));
```
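For completeness, a self-contained sketch of the `max` approach (the `Product` class below is a minimal stand-in, and the `int` price plus `String` keys are assumptions, since the question doesn't show the class):

```java
import java.util.Comparator;
import java.util.Map;
import java.util.Optional;

public class MaxPriceDemo {

    // Minimal stand-in for the Product class from the question.
    public static class Product {
        public final String name;
        public final int price;

        public Product(String name, int price) {
            this.name = name;
            this.price = price;
        }

        public int getPrice() {
            return price;
        }
    }

    // Returns the most expensive product, or Optional.empty() for an empty map.
    public static Optional<Product> mostExpensive(Map<String, Product> items) {
        return items.values()
                    .stream()
                    .max(Comparator.comparingInt(Product::getPrice));
    }

    public static void main(String[] args) {
        Map<String, Product> items = Map.of(
                "a", new Product("pen", 2),
                "b", new Product("book", 15),
                "c", new Product("bag", 9));
        Product max = mostExpensive(items).get();
        System.out.println(max.name + " " + max.getPrice());
    }
}
```

An empty map simply yields `Optional.empty()`, so the caller decides what an absent result means instead of handling `null`.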
Upvotes: 4 [selected_answer]<issue_comment>username_2: Just noticed that passing a `HashMap` makes no sense since you are not using the `Key` in any way, so this could have been written to take a `Collection` as an argument, so it would become:
```
public Product getMostExpensiveProduct(Collection<Product> products) {
    return Collections.max(products, Comparator.comparingInt(Product::getPrice));
}
```
But then, this would be a method with a single line, so you could just call it in place where you need it instead:
```
Collections.max(items.values(), Comparator.comparingInt(Product::getPrice));
```
The downside is that this would throw a `NoSuchElementException` if the incoming `Collection` is empty.
*Or* if you care about all the checks of null and empty collection (and you should):
```
public Optional<Product> getMostExpensiveProduct(Collection<Product> products) {
    if (products == null || products.isEmpty()) {
        return Optional.empty();
    }
    return Optional.of(Collections.max(
            products, Comparator.comparingInt(Product::getPrice)));
}
```
Upvotes: 2
|
2018/03/18
| 430 | 1,565 |
<issue_start>username_0: I did some research and there are many questions about changing src of an image in JavaScript. However I want to change src of three images when clicking on button at once.
HTML:
```
* [Shoes](#!)
* [T-shirts](#)


```
JavaScript:
```
function changeImage() {
    let change = document.querySelector('#imgChange').src = "img/shoe.png";
}
```
So this code works perfectly but it changes only one image. I want to change 2 or more images at one click. I've tried to get the elements by class, but it still changes only the first image. So in this case I want to change the image in the second div too. Thanks in advance!<issue_comment>username_1: `document.querySelector` selects only the first element, so `document.querySelector('#imgChange')` will select just the first element with that `id`. Use `document.querySelectorAll('img')` to select all the `img` tags, then use `forEach` to iterate over them and change each `src`:
```js
function changeImage() {
let change = document.querySelectorAll('img').forEach(function(item) {
item.src = "img/shoe.png";
})
}
```
```html
* [Shoes](#!)
* [T-shirts](#)


```
Upvotes: 3 [selected_answer]<issue_comment>username_2: The `querySelector()` method returns only the first element that matches the specified CSS selector(s) in the document.
Try
`var elements = document.getElementsByClassName(names)`
to get all elements that share a class name. Or try `var element = document.getElementById(id)` for finer selection.
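A sketch of that class-based route (the class name `productImg` is hypothetical; use whatever class your images share). `getElementsByClassName` returns an array-like `HTMLCollection`, so a plain index loop works:

```javascript
// Assign the same src to every element in an array-like collection.
function setAllSrc(elements, src) {
  for (var i = 0; i < elements.length; i++) {
    elements[i].src = src;
  }
}

if (typeof document !== 'undefined') {
  // 'productImg' is a placeholder class name shared by the images.
  setAllSrc(document.getElementsByClassName('productImg'), 'img/shoe.png');
}
```

The helper is plain JavaScript, so it works the same with a `querySelectorAll` result.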
Upvotes: 0
|
2018/03/18
| 386 | 1,382 |
<issue_start>username_0: I need my web application to behave as a native mobile app. For this I needed to prevent pinch-to-zoom and double tap zooming on all browsers. In Chrome and Firefox it was easy:
```
```
On Safari its a challenge.
[Here](https://stackoverflow.com/a/38573198/8864418) I found how to prevent pinch-to-zoom and to disable double-tap zoom. But when you are scrolling and pinching to zoom, it still zooms. My question is: is there a way to block it while scrolling as well?
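For reference, the approach the linked answer describes boils down to cancelling multi-touch events yourself. This is only a sketch; whether Safari honors `preventDefault()` during momentum scrolling varies by version:

```javascript
// Cancel any touch event that involves more than one finger (a pinch).
function blockPinch(event) {
  if (event.touches && event.touches.length > 1) {
    event.preventDefault();
  }
}

if (typeof document !== 'undefined') {
  // passive: false is required, otherwise Safari ignores preventDefault()
  // inside touch listeners.
  document.addEventListener('touchmove', blockPinch, { passive: false });
  // gesturestart is a Safari-only event fired at the start of a pinch.
  document.addEventListener('gesturestart', function (e) { e.preventDefault(); });
}
```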
|
2018/03/18
| 488 | 1,790 |
<issue_start>username_0: Porting code over from VS for Windows to VS for MAC. Part of one function opens a file using File.Open (System.IO) with a string that, on windows at least, would provide the absolute path and file to open. The same code (on VS for MAC seems to bolster the string to include the path to the current project dir binary.
To give an example here is what I'd 'like' to do:
```
File.Open("~/Projects/TestProg/UseThisFile.txt", FileMode.Open)
```
However, what occurs is an IO exception that indicates the following:
>
> System.IO.FileNotFoundException Could not find file
> "/Users//Projects/TestProg/bin/Debug/TestProg.app/Contents/Resources/~/Projects/TestProg/UseThisFile.txt"
>
>
>
Just wondering what I am missing, and/or whether I should even be using File.Open for this rather than NSFileManager (I'd rather not have to rebase a bunch of code if I don't have to)?
|
2018/03/18
| 727 | 2,842 |
<issue_start>username_0: I am trying to fetch the records from an API but it returns unsuccessful response when I am using the below code. But at the same time, it is giving me proper response with status code 200 in postman. Here is my code. Please help:
```
import React, {Component} from "react";
const urlEndPoint = myEndPoint => "https://api.github.com/users/adhishlal"
export default class Categories extends Component {
constructor(props) {
super(props)
this.state = {
requestFailed: false
}
}
componentDidMount() {
fetch(urlEndPoint(this.props.myEndPoint))
.then(response => {
if (!response.ok) {
throw Error(response.statusText)
}
return response
})
.then(d => response.json())
.then(response => {
this.setState({
responseData: d
})
}, () =>
{
this.setState({
requestFailed: true
})
})
}
render() {
//If request is failed
if(this.state.requestFailed)
{
return Request Failed
}
//Waiting for data to load from Zoho
if(!this.state.responseData)
{
return Loading...
}
//Finally populate the data when loaded
var indents = [];
for (var i = 0; i < 10; i++) {
indents.push(
{i+1}
{this.state.responseData.name}
Bangalore , Chennai , Delhi NCR , Hyderabad , Mumbai
);
}
return (
{indents}
);
}
}
```
Can you suggest where I am making the mistake? You can run the API in your restclient or Postman to see the response.<issue_comment>username_1: I haven't completed your code but will show you how to get the data using fetch.
```
fetch('https://api.github.com/users/adhishlal')
  .then(response => {
    response.json().then(function(data) {
      console.log(data);
    });
  });
```
the `data` in the code will be the data you require.
>
> The response of a fetch() request is a Stream object, which means that when we call the json() method, a Promise is returned since the reading of the stream will happen asynchronously.
>
>
>
In my opinion using [axios](https://github.com/axios/axios) instead will make life a lot easier.
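For completeness, a hedged `async`/`await` version of the same request that also checks `response.ok` before parsing. The fetch implementation is passed in so it can be mocked; in the browser you would simply pass the global `fetch`:

```javascript
// Fetch a URL and parse the JSON body, failing loudly on HTTP errors.
async function getJson(url, fetchImpl) {
  const response = await fetchImpl(url);
  if (!response.ok) {
    throw new Error('Request failed with status ' + response.status);
  }
  return response.json();
}

// Browser usage (hypothetical):
// getJson('https://api.github.com/users/adhishlal', fetch).then(console.log);
```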
Hope this helps
Upvotes: 3 [selected_answer]<issue_comment>username_2: The code written above has a few errors. Incorrect values are returned in some places, and setState is applied to the wrong variable. Fixing those will make it run fine.
I have updated the piece of code which is causing the problem
```
.then(d => d.json()) //Return d.json() and not response.json()
.then(response => {
this.setState({
responseData: response, //setState to response and not d
})
}, () =>
```
Upvotes: 0
|
2018/03/18
| 611 | 2,268 |
<issue_start>username_0: I'm using the django-filter package. My page displays all books. I have 5 genres. I want someone to be able to click a "scary" button, and have that filter my book list (user shouldn't have to also click "submit", just the genre button).
But right now when I click a genre button, it doesn't filter. Unless I use a checkbox widget in my filters.py (commented out), check the box, and then click the genre button, but this looks ugly.
Thanks in advance for the help!
**Filters.py**
```
class BookFilter(django_filters.FilterSet):
    genre = django_filters.ModelMultipleChoiceFilter(queryset=Genre.objects.all())
    # widget=CheckboxSelectMultiple

    class Meta:
        model = Book
        fields = ['genre']
```
**book\_list.html**
```
{% for genre in filter.form.genre %}
* {{ genre }}
{% endfor %}
```
**Update**
Thanks to GwynBleidD for the help! I also had to change `{% for genre in filter.form.genre %}` to `{% for genre in genre_list %}` where, in my views, `genre_list = Genre.objects.all()`. Otherwise, the button value came out as `value="option value=10"` instead of `value="10"`.
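A sketch of the markup-only route for the no-extra-submit-click requirement: make each genre button a submit button that carries the filter's field name and value (names mirror the question; the wrapping `form` tag is assumed):

```html
<form method="get">
  {% for genre in genre_list %}
    <button type="submit" name="genre" value="{{ genre.id }}">
      {{ genre.name }}
    </button>
  {% endfor %}
</form>
```

Clicking any button then submits `?genre=<id>` immediately, which `ModelMultipleChoiceFilter` picks up from the GET parameters.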
|
2018/03/18
| 457 | 1,743 |
<issue_start>username_0: I am trying to push my code to the beanstalk but I am getting an error when I hit the `eb create` command
```
WARNING: You have uncommitted changes.
Starting environment deployment via CodeCommit
Could not push code to the CodeCommit repository:
ERROR: CommandError - An error occurred while handling git command.
Error code: 128 Error: fatal: unable to access 'https://git-codecommit.us-west-2.amazonaws.com/v1/repos/origin/': The requested URL returned error: 403
```
I have already created an environment using AWS Beanstalk; how should I push to that?
|
2018/03/18
| 445 | 1,591 |
<issue_start>username_0: I am trying to launch a debug session for my react-native app inside vscode (I have setup a debug configuration in the ./vscode/launch.json file which has been working fine until a few days).
1. I start the debug session from the debug dropdown and I select the "expo" configuration:
[](https://i.stack.imgur.com/cYDj5.png)
2. The output window shows that the debug session starts:
[](https://i.stack.imgur.com/tWqQA.png)
And it just stucks there...
But on the bottom-right corner, I get this message:
[](https://i.stack.imgur.com/rwrJZ.png)
I think the debugger is waiting for me to connect to this account. I can't seem to figure out how to connect to this account from within vscode. I can access all available commands from the react-native tools: no login prompt.
Has anyone faced this problem? Is this something new with the exponent extension in vscode (I remember that a few days ago this problem was not there...)?
Any suggestion would be really welcomed.<issue_comment>username_1: Go to terminal and log in expo using this command
`exp login`
Upvotes: 2 [selected_answer]<issue_comment>username_2: Install (globally) exp
```
npm install -g exp
```
and then use
```
exp login
```
as mentioned above and enter your expo.io credentials
Upvotes: 1 <issue_comment>username_3: Now, in 2020 we use `expo login` to login and `expo logout` to logout
Upvotes: 0
|
2018/03/18
| 872 | 3,181 |
<issue_start>username_0: I am trying to ignore case sensitivity on a string. For example, a user can put "Brazil" or "brasil" and the fun will trigger. How do I implement this? I am new to Kotlin.
```
fun questionFour() {
    val edittextCountry = findViewById<EditText>(R.id.editTextCountry)
    val answerEditText = edittextCountry.getText().toString()
    if (answerEditText == "Brazil") {
        correctAnswers++
    }
    if (answerEditText == "Brasil") {
        correctAnswers++
    }
}
```
### EDIT
Another person helped me write like this. My question now about this way is "Is there a cleaner way to write this?"
```
fun questionFour() {
    val edittextCountry = findViewById<EditText>(R.id.editTextCountry)
    val answerEditText = edittextCountry.getText().toString()
    if (answerEditText.toLowerCase() == "Brazil".toLowerCase() || answerEditText.toLowerCase() == "Brasil".toLowerCase()) {
        correctAnswers++
    }
}
```
### Answer
```
fun questionFour() {
    val edittextCountry = findViewById<EditText>(R.id.editTextCountry)
    val answerEditText = edittextCountry.getText().toString()
    if (answerEditText.equals("brasil", ignoreCase = true) || answerEditText.equals("brazil", ignoreCase = true)) {
        correctAnswers++
    }
}
```<issue_comment>username_1: The core problem is that `==` just calls through to `equals()`, which is case sensitive. There are a few ways to solve this:
1) Lowercase the input and direct compare:
```
if (answerEditText.toLowerCase() == "brasil" ||
answerEditText.toLowerCase() == "brazil") {
// Do something
}
```
This is easy to understand and maintain, but if you have more than a couple of answers, it gets unwieldy.
2) Lowercase the input and test for values in a set:
```
if (answerEditText.toLowerCase() in setOf("brasil", "brazil")) {
// Do Something
}
```
Perhaps define the set as a constant somewhere (in a companion object?) to avoid recreating it a few times. This is nice and clear, useful when you have a lot of answers.
3) Ignore the case and compare via `.equals()` method:
```
if (answerEditText.equals("Brazil", true) ||
answerEditText.equals("Brasil", true)) {
// Do something
}
```
Similar to option 1, useful when you only have a coupe of answers to deal with.
4) Use a case insensitive regular expression:
```
val answer = "^Bra(s|z)il$".toRegex(RegexOption.IGNORE_CASE)
if (answer.matches(answerEditText)) {
// Do something
}
```
Again, create the `answer` regex once and store it somewhere to avoid recreating. I feel this is an overkill solution.
Upvotes: 3 <issue_comment>username_2: You can call the [`equals`](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.text/equals.html) function directly, which will allow you to specify the optional parameter `ignoreCase`:
```
if (answerEditText.equals("brasil", ignoreCase = true)) {
correctAnswers++
}
```
Upvotes: 6 [selected_answer]<issue_comment>username_3: We created an extension function and use it so we can avoid specifying the second parameter.
```
fun String.equalsIgnoreCase(other: String?): Boolean {
    if (other == null) {
        return false
    }
    return this.equals(other, true)
}

println("California".equalsIgnoreCase("CALIFORNIA"))
```
Upvotes: 1
|
2018/03/18
| 941 | 3,610 |
<issue_start>username_0: I want to add status field to error response, so instead of this:
```
{
"errors": [
{
"message": "Authentication credentials were not provided",
"locations": [
{
"line": 2,
"column": 3
}
]
}
],
"data": {
"viewer": null
}
}
```
It should be like this:
```
{
"errors": [
{
"status": 401, # or 400 or 403 or whatever error status suits
"message": "Authentication credentials were not provided",
"locations": [
{
"line": 2,
"column": 3
}
]
}
],
"data": {
"viewer": null
}
}
```
I found out that I can only change the message by raising an Exception inside the resolver: `raise Error('custom error message')`. But how do I add a field?
Code example:
```
class Query(UsersQuery, graphene.ObjectType):
    me = graphene.Field(SelfUserNode)

    def resolve_me(self, info: ResolveInfo):
        user = info.context.user
        if not user.is_authenticated:
            # but status attr doesn't exist...
            raise GraphQLError('Authentication credentials were not provided', status=401)
        return user
```<issue_comment>username_1: I didn't find a way to solve your problem in the way that you propose; instead I extend the `LoginRequiredMixin` class like this:
```
class LoginRequiredMixin:
    def dispatch(self, info, *args, **kwargs):
        if not info.user.is_authenticated:
            e = HttpError(HttpResponse(status=401, content_type='application/json'), 'Please log in first')
            response = e.response
            response.content = self.json_encode(info, [{'errors': [self.format_error(e)]}])
            return response
        return super().dispatch(info, *args, **kwargs)


class PrivateGraphQLView(LoginRequiredMixin, GraphQLView):
    schema = schema
```
and in your url:
```
from django.views.decorators.csrf import csrf_exempt
from educor.schema import PrivateGraphQLView
url(r'^graphql', csrf_exempt(PrivateGraphQLView.as_view(batch=True)))
```
You can't see the status with graphiql, but in your client you can get it in the headers, or you could modify this line to add it to the response: `response.content = self.json_encode(info, [{'errors': [self.format_error(e)]}])`. Hope it helps; anyway, I'll leave you another possible solution: <https://github.com/graphql-python/graphene-django/issues/252>
Upvotes: 1 <issue_comment>username_2: Update the default `GraphQLView` with the following:
```py
from graphene_django.views import GraphQLView as BaseGraphQLView
class GraphQLView(BaseGraphQLView):
@staticmethod
def format_error(error):
formatted_error = super(GraphQLView, GraphQLView).format_error(error)
try:
formatted_error['context'] = error.original_error.context
except AttributeError:
pass
return formatted_error
urlpatterns = [
path('api', GraphQLView.as_view()),
]
```
This will look for the `context` attribute in any exceptions raised. If it exists, it'll populate the error with this data.
Now you can create exceptions for different use cases that populate the `context` attribute. In this case you want to add the status code to errors, here's an example of how you'd do that:
```py
class APIException(Exception):
def __init__(self, message, status=None):
self.context = {}
if status:
self.context['status'] = status
super().__init__(message)
```
You'd use it like this:
```py
raise APIException('Something went wrong', status=400)
```
Upvotes: 4
|
2018/03/18
| 1,746 | 5,066 |
<issue_start>username_0: I am trying to get some encrypted connection parameter from s3 bucket in my sample Spring application. Here is the method I am using to run inside a container:
```
public void encryptionOnly_KmsManagedKey() throws NoSuchAlgorithmException {
AmazonS3Encryption s3Encryption = AmazonS3EncryptionClientBuilder
.standard()
.withRegion(Regions.US_EAST_1)
.withCryptoConfiguration(new CryptoConfiguration(CryptoMode.AuthenticatedEncryption))
// Can either be Key ID or alias (prefixed with 'alias/')
.withEncryptionMaterials(new KMSEncryptionMaterialsProvider("alias/db-connstring"))
.build();
//System.out.println(amazonS3.getObjectAsString(BUCKET_NAME, PROPERTIES_FILE_NON_ENC));
System.out.println(s3Encryption.getObjectAsString(BUCKET_NAME, PROPERTIES_FILE_ENC));
}
```
I have created an IAM role and assigned to my ECS task:
```
{ "Version": "2012-10-17", "Statement": [ { "Action": [ "s3:GetObject" ], "Sid": "Stmt1", "Resource": [ "arn:aws:s3:::dev-web-s3/dev-webapp.properties" ], "Effect": "Allow" } ] }
```
I am getting below error:
```
15:31:25,502 WARN [com.amazonaws.http.AmazonHttpClient] (http-/0.0.0.0:8080-1) SSL Certificate checking for endpoints has been explicitly disabled.
15:31:25,523 WARN [com.amazonaws.http.AmazonHttpClient] (http-/0.0.0.0:8080-1) SSL Certificate checking for endpoints has been explicitly disabled.
15:31:26,000 ERROR [stderr] (http-/0.0.0.0:8080-1) com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 6859B84E52157DB7), S3 Extended Request ID: hIneAhT8TYX3P1z8zOGetqrSHhz5AqeOtRQnkCU9IuR0mBpMntFE9TXySu2iYv0Bbs4xONkxRz0=
15:31:26,001 ERROR [stderr] (http-/0.0.0.0:8080-1) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1588)
15:31:26,001 ERROR [stderr] (http-/0.0.0.0:8080-1) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1258)
15:31:26,001 ERROR [stderr] (http-/0.0.0.0:8080-1) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1030)
15:31:26,001 ERROR [stderr] (http-/0.0.0.0:8080-1) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:742)
15:31:26,002 ERROR [stderr] (http-/0.0.0.0:8080-1) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716)
15:31:26,002 ERROR [stderr] (http-/0.0.0.0:8080-1) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
15:31:26,002 ERROR [stderr] (http-/0.0.0.0:8080-1) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
15:31:26,002 ERROR [stderr] (http-/0.0.0.0:8080-1) at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
15:31:26,002 ERROR [stderr] (http-/0.0.0.0:8080-1) at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
```
I am using Fargate to deploy my containers, so I could not go inside the containers, but I ran the below command and got the result shown:
`curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI`
```
{
"RoleArn": "arn:aws:iam::281177187806:role/CC_GetPropertiesFile_S3_Role",
"AccessKeyId": "REDACTED",
"SecretAccessKey": "REDACTED",
"Token": "<PASSWORD>",
"Expiration": "2018-03-16T00:21:55Z"
}
```
`CC_GetPropertiesFile_S3_Role` is the role created earlier, whose trusted entity is the AWS service ecs-tasks.
I tried to use the credentials from the CURL command using BasicAWSCredentials options and got below error:
```
com.amazonaws.services.s3.model.AmazonS3Exception: The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: 81CC7254BCA83D53), S3 Extended Request ID: of46B2ujyTG
```
I am trying to get debug-level logging information but meanwhile any other issues which can be pointed out or alternative solution provided is much appreciated.
Thanks.<issue_comment>username_1: You must use two different Amazon Resource Names (ARNs) to specify bucket-level (ListBucket) and object-level (GetObject) permissions.
Credentials from IAM roles have an expiration time, hence the `does not exist` error.
<https://aws.amazon.com/blogs/security/writing-iam-policies-how-to-grant-access-to-an-amazon-s3-bucket/>
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["s3:ListBucket"],
"Resource": ["arn:aws:s3:::dev-web-s3"]
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": ["arn:aws:s3:::dev-web-s3/dev-webapp.properties"]
}
]
}
```
Upvotes: 0 <issue_comment>username_2: I was able to resolve the issue by adding my custom role `CC_GetPropertiesFile_S3_Role` as a user of the KMS encryption key I was using in the Java method.
Upvotes: 1
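In policy terms, that fix means the task role must be allowed to use the KMS key. A sketch of such a statement (the account ID and key ARN are placeholders, and the action list is an assumption about the typical minimum for the S3 encryption client):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "kms:GenerateDataKey",
        "kms:DescribeKey"
      ],
      "Resource": "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID"
    }
  ]
}
```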
|
2018/03/18
| 810 | 2,536 |
<issue_start>username_0: How is it that you would create a tensorflow vector from a tensorflow constant/variable etc?
For example I have a constant `x` and I want to create a vector which is `[x]`.
I have tried the code below and it doesn't work.
Any help would be appreciated.
```
x = tf.placeholder_with_default(1.0,[], name="x")
nextdd = tf.constant([x], shape=[1], dtype=tf.float32)
```<issue_comment>username_1: First I'd like to define a tensor for you:
Tensors are n-dimensional matrices. A rank 0 tensor is a scalar, e.g. 42. A rank 1 tensor is a vector, e.g. [1,2,3]. A rank 2 tensor is a matrix. A rank 3 tensor might be an image of shape [640, 480, 3] (640x480 resolution, 3 color channels). A rank 4 tensor might be a batch of such images of shape [10, 640, 480, 3] (10 640x480 images), etc.
Second, you have basically 4 types of tensors in Tensorflow.
1) Placeholders - these are tensors that you pass into tensorflow when you call `sess.run`. For example: `sess.run([nextdd], {x:[1,2,3]})` creates a rank 1 tensor out of `x`.
2) Constants - these are fixed values as the name suggests. E.g. `tf.constant(42)` and should be specified at compile time, not runtime (eluding to your primary mistake here).
3) Computed tensors - `x = tf.add(a,b)` is a computed tensor, it's computed from a,b. Its value is not stored after the computation is finished.
4) Variables - These are mutable tensors that are kept around after the computation is complete. For example the weights of a neural network.
Now to address your question explicitly. `x` is already a tensor. If you were passing in a vector then it's a rank 1 tensor (aka a vector). You can use it just like you'd use a constant, computed tensor, or variable. They all work the same in operations. There is no reason for the nextdd line at all.
Now, `nextdd` fails because you tried to create a constant from a variable term, which isn't a defined operation. `tf.constant(42)` is well defined; that's what a constant is.
You could just use x directly, as in:
```
x = tf.placeholder_with_default(1.0,[], name="x")
y = tf.add(x, x)
sess = tf.InteractiveSession()
y.eval()
```
Result:
```
2.0
```
Upvotes: 2 <issue_comment>username_2: From you description, it looks like you want to use [`tf.expand_dims`](https://www.tensorflow.org/api_docs/python/tf/expand_dims):
```
# 't' is a tensor of shape [2]
tf.shape(tf.expand_dims(t, 0)) # [1, 2]
tf.shape(tf.expand_dims(t, 1)) # [2, 1]
tf.shape(tf.expand_dims(t, -1)) # [2, 1]
```
Upvotes: 1 [selected_answer]
|
2018/03/18
| 386 | 1,435 |
<issue_start>username_0: I have a "sign up" button. When it is clicked, I would like for it to render a new component, called "SignUp". I would like for the new component to replace the "sign up" button.
Currently, I am using setState so that when the button is clicked, a new component is rendered. However, the button is not being replaced, the "SignUp" component is just rendered beside the button. What may be a good approach for me to replace the button with the new component being rendered?
I have provided my code below:
```
export default class SignUpSignIn extends Component {
  constructor() {
    super();
    this.state = {
      clicked: false
    };
    this.handleClick = this.handleClick.bind(this);
  }

  handleClick() {
    this.setState({
      clicked: true
    });
  }

  render () {
    return (
      <div>
        {this.state.clicked ? <SignUp /> : null}
        <button onClick={this.handleClick}>Sign Up</button>
      </div>
    )
  }
}
```<issue_comment>username_1: Well, you're not rendering both components conditionally, only the `SignUp`. Instead of having `null` in your ternary, render the sign in button when `state.clicked === false`:
```
render () {
  return (
    <div>
      {this.state.clicked ? (
        <SignUp />
      ) : (
        <button onClick={this.handleClick}>Sign Up</button>
      )}
    </div>
  )
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: One way to do it is like this; I didn't test it yet, but it should work:
```
render () {
  let show = null // renders nothing until the button is clicked
  if (this.state.clicked) {
    show = <SignUp />
  }
  return (
    <div>
      {show}
      <button onClick={this.handleClick}>Sign Up</button>
    </div>
  )
}
```
Upvotes: 1
|
2018/03/18
| 1,559 | 5,362 |
<issue_start>username_0: If a module `A` relies on module `B` and module `B` is upgraded, `A` may break due to changes. My idea is to retest both `A` and `B` after upgrading `B`.
I think the easiest way is just to retest everything that can be retested: download every installed module from CPAN and execute its test scripts.
Is there a way to download and retest?
If there is no way, are there any helpers/API so I can implement such tool?
I basically need to
* query what is installed (including version numbers)
* download and unpack exact versions
* execute tests<issue_comment>username_1: The `cpan` tool that ships with core Perl includes a `-l` option, that directs it to provide a list of installed modules. For example, the last 10 items in the list on my system:
```
$ cpan -l 2>/dev/null |tail -10
Test2::Event::Encoding 1.302073
Test2::Event::Bail 1.302073
Test2::Event::Exception 1.302073
Test2::Event::Subtest 1.302073
Test2::Event::Skip 1.302073
Test2::Event::Info 1.302073
Test2::Event::Diag 1.302073
Test2::Event::TAP::Version 1.302073
JSON::PP 2.27400_02
JSON::PP::Boolean undef
```
As demonstrated here you get a list of modules and version numbers. Sometimes the tool doesn't find the version in META, and therefore will return `undef` for the version numbers. CPAN authors should be on the lookout for these sorts of mistakes, as they make it harder for tools that wish to identify the version without compiling the module itself.
Once you have the modules and version numbers you can use the `cpanm` tool (provided by App::cpanm) with its `--test-only` option to pull down a module of a specific version and test it. You can request a specific version like this:
```
cpanm Some::Module@0.9990
```
(Pulls down only version 0.9990 of the target module).
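Gluing those two pieces together, a hedged shell sketch that turns `cpan -l` style output into `cpanm --test-only` commands. It dry-runs by default (just printing the commands), since actually re-testing everything is slow:

```shell
#!/bin/sh
# Read "Module version" lines (the format printed by `cpan -l`) on stdin
# and turn each into a `cpanm --test-only Module@version` invocation.
# With DRY_RUN=1 (the default) the commands are only printed.
emit_tests() {
  while read -r module version; do
    case "$version" in
      ""|undef) continue ;;   # skip modules whose version could not be parsed
    esac
    if [ "${DRY_RUN:-1}" = 1 ]; then
      echo "cpanm --test-only $module@$version"
    else
      cpanm --test-only "$module@$version"
    fi
  done
}
```

Typical use would be piping `cpan -l 2>/dev/null` into `emit_tests`.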
Where things get tricky are the following: Perl ships with a bunch of modules, and some of them also receive updates via CPAN. The `cpan -l` tool will list all installed modules, including the ones that ship with Perl.
Also some of the modules listed are simply part of a larger distribution.
Another tool that can be useful to you, which has been bundled with Perl since 5.8.9, is `corelist`. If you run this:
```
corelist File::Temp
```
You will get: "`File::Temp was first released with perl v5.6.1`"
If you do this:
```
corelist JSON
```
You will get: "`JSON was not in CORE (or so I think)`"
So it is pretty simple to determine if a module you're looking at in your list is one that ships with Perl or not. Use that information as you see fit.
Another thing you'll have to solve for is what to do about shared dependencies. If the first thing you test is a Moose upgrade, you'll pull in half of CPAN (that's an exaggeration), and that will dirty your environment for testing other modules. To mitigate this effect you have a few choices. One is to leverage `App::perlbrew` and its `lib` option to set up throwaway library space. That way you can install a module and its dependencies in a destination designated by `perlbrew lib` and `perlbrew use`, and then throw it away when done to move on to the next library for testing.
However, there may be a better solution, of which I am not adequately familiar to document here: The toolchain used by CPAN smoke testers. See [CPAN::Testers](http://search.cpan.org/~barbie/CPAN-Testers-0.07/lib/CPAN/Testers.pm "CPAN::Testers") if you wish to pursue this strategy. The smoke testers have worked out relatively lightweight approaches to pulling down and testing modules with their dependencies in an automated fashion.
Finally, another issue you will encounter is that CPAN authors are the ones who decide which versions of their modules live on CPAN and which ones get deleted. A few years ago CPAN authors were encouraged to keep their CPAN repositories clean by deleting old versions. I don't know if this practice is still encouraged, but it does mean that you cannot count on a specific version number still existing. To solve for this problem you should probably maintain your own repository of the tarballs for all versions you have installed at a given moment in time. The CPAN module framework [Pinto](http://search.cpan.org/~thaljef/Pinto-0.14/lib/Pinto.pm) helps with keeping versions of a module, pinning some to not update, and other useful tricks.
Upvotes: 2 <issue_comment>username_2: If you download and extract distribution `B` and `cd` into that directory, you can use the [brewbuild](https://metacpan.org/pod/distribution/Test-BrewBuild/bin/brewbuild) binary from my [Test::BrewBuild](https://metacpan.org/release/Test-BrewBuild) (NOTE: requires Perlbrew or Berrybrew) with the `-R` aka `--revdep` flag to test `B` as well as all distributions that require `B`:
```
~/repos/Mock-Sub $ brewbuild -R
reverse dependencies: Test::Module::CheckDep::Version, App::RPi::EnvUI, RPi::DigiPot::MCP4XXXX, Devel::Examine::Subs, PSGI::Hector, File::Edit::Portable, Devel::Trace::Subs
Test::Module::CheckDep::Version
5.26.1 :: PASS
App::RPi::EnvUI
5.26.1 :: FAIL
RPi::DigiPot::MCP4XXXX
5.26.1 :: FAIL
Devel::Examine::Subs
5.26.1 :: PASS
PSGI::Hector
5.26.1 :: PASS
File::Edit::Portable
5.26.1 :: PASS
Devel::Trace::Subs
5.26.1 :: PASS
```
I do this every time I update one of my CPAN distributions that have reverse dependencies, so I don't break any consumers of my dists.
Upvotes: 2
|
2018/03/18
| 2,223 | 5,257 |
<issue_start>username_0: I have a file with mixed delimiters `,` and `/`. When I import it into SAS with the following data step:
```
data SASDATA.Publications ;
infile 'R:/Lipeng_Wang/PATSTAT/Publications.csv'
DLM = ','
DSD missover lrecl = 32767
firstobs = 3 ;
input pat_publn_id :29.
publn_auth :$29.
publn_nr :$29.
publn_nr_original :$29.
publn_kind :$29.
appln_id :29.
publn_date :YYMMDD10.
publn_lg :$29.
publn_first_grant :29.
publn_claims :29. ;
format publn_date :YYMMDDd10. ;
run ;
```
the sas log shows that
```
NOTE: Invalid data for appln_id in line 68262946 33-34.
NOTE: Invalid data for publn_date in line 68262946 36-44.
RULE: ----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8----+----9
68262946 390735978,HK,1053433,09/465,054,A1,275562685,2010-03-26, ,0,0 62
pat_publn_id=390735978 publn_auth=HK publn_nr=1053433 publn_nr_original=09/465 publn_kind=054
appln_id=. publn_date=. publn_lg=2010-03-26 publn_first_grant=. publn_claims=0 _ERROR_=1
_N_=68262944
NOTE: Invalid data for appln_id in line 68280355 33-34.
NOTE: Invalid data for publn_date in line 68280355 36-44.
68280355 390753387,HK,1092990,60/523,466,A1,275562719,2010-03-26, ,0,0 62
pat_publn_id=390753387 publn_auth=HK publn_nr=1092990 publn_nr_original=60/523 publn_kind=466
appln_id=. publn_date=. publn_lg=2010-03-26 publn_first_grant=. publn_claims=0 _ERROR_=1
_N_=68280353
```
It seems that I need to get '60/523,466' into the 'publn\_nr\_original' column, but how should I do that?<issue_comment>username_1: Ok, I see what you mean - you have a field with a comma, in a comma separated file, and that field is not quoted.
For this you will have to read the two parts in separately and add the comma back in, as per the example code below.
```
%let some_csv=%sysfunc(pathname(work))/some.csv;
data _null_;
file "&some_csv";
put /;
put '390735978,HK,1053433,09/465,054,A1,275562685,2010-03-26, ,0,0';
put '390753387,HK,1092990,60/523,466,A1,275562719,2010-03-26, ,0,0';
run;
data work.Publications ;
infile "&some_csv" DLM = ',' DSD missover lrecl = 32767 firstobs = 3 ;
input pat_publn_id :best. publn_auth :$29. publn_nr :$29.
publn_nr_original1 :$29. publn_nr_original2:$29.
publn_kind :$29. appln_id :best.
publn_date :YYMMDD10. publn_lg :$29. publn_first_grant :best.
publn_claims :best. ;
format publn_date YYMMDDd10. ;
publn_nr_original=cats(publn_nr_original1,',',publn_nr_original2);
run ;
```
Upvotes: 0 <issue_comment>username_2: Your program code has two obvious issues.
First, your syntax on the FORMAT statement is wrong. The `:` modifier is a feature of the `INPUT` or `PUT` statement syntax and should not be used in a `FORMAT` statement.

Second, you are trying to read 29 digits into a number. You cannot store 29 digits accurately in a SAS number, since SAS numerics are 8-byte floating-point values. If those values really are longer than 15 digits you will need to read them into character variables. And if they really are smaller numbers (that could be stored as numbers) then you don't need to include an informat specification in the `INPUT` statement at all; SAS already knows how to read numbers from text files, and in list mode the INPUT statement will ignore the width on the informat anyway.
But your error message looks to be caused by an improperly formed file. I suspect that one of the first 6 columns has a comma in its value, but whoever created the data file forgot to add quotes around the value with the comma. If you can figure out which field the comma should be in then you might be able to parse the line in a way that it can be used.
Here is one method that might work assuming that the commas only appear in the `publn_nr_original` variable and that at most one comma will appear.
```
data want ;
infile cards dsd truncover firstobs=3;
length
pat_publn_id $30
publn_auth $30
publn_nr $30
publn_nr_original $30
publn_kind $30
appln_id $30
publn_date 8
publn_lg $30
publn_first_grant $30
publn_claims $30
;
informat publn_date YYMMDD10. ;
format publn_date YYMMDDd10. ;
input @;
if countw(_infile_,',','mq')<= 10 then input pat_publn_id -- publn_claims ;
else do ;
list ;
input pat_publn_id -- publn_nr_original xxx :$30. publn_kind -- publn_claims ;
publn_nr_original=catx(',',publn_nr_original,xxx);
drop xxx;
end;
cards4;
Header1
Header2
1,22,333,4444,55,6666,2010-03-26,77,8,9999
390735978,HK,1053433,09/465,054,A1,275562685,2010-03-26, ,0,0
390735978,HK,1053433,"09/465,054",A1,275562685,2010-03-26, ,0,0
390753387,HK,1092990,60/523,466,A1,275562719,2010-03-26, ,0,0
;;;;
```
But the real solution is to fix the process that created the file. So instead of having a line like this in the file:
```
390735978,HK,1053433,09/465,054,A1,275562685,2010-03-26, ,0,0
```
The line should have looked like this:
```
390735978,HK,1053433,"09/465,054",A1,275562685,2010-03-26, ,0,0
```
Upvotes: 1
|
2018/03/18
| 375 | 1,425 |
<issue_start>username_0: I'm trying dynamically to add additional text to html. I get the text but it's not in h3. Any ideas why?
```
let brElement = document.createElement("br");
let pickCategory = document.createElement("div");
let h3Element = document.createElement("h3");
let newPick = document.createTextNode("Изберете категория МПС: ");
document.body.appendChild(pickCategory.appendChild(brElement.appendChild(h3Element.appendChild(newPick))));
```
The result is: a new div, a new line, and the text I want, but not an h3. Why?!<issue_comment>username_1: The return value of appendChild is the appended child, not the parent, which is what would be required for your code to work as intended.
Upvotes: 2 <issue_comment>username_2: appendChild returns the child element, so chaining them doesn't work (<https://developer.mozilla.org/en-US/docs/Web/API/Node/appendChild>).
Just make each append a separate statement and it should work.
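A minimal stand-in for `appendChild` (a mock, since there is no real DOM here) makes the contract visible — the call hands you back the child, so the nested calls in the question keep descending instead of returning the container:

```javascript
// Mock of Node.appendChild's contract: it returns the appended CHILD,
// not the parent, so parent.appendChild(child).appendChild(...) chains
// off the child rather than staying on the parent.
function makeNode(tag) {
  return {
    tag,
    children: [],
    appendChild(child) {
      this.children.push(child);
      return child; // the DOM contract: the child comes back
    },
  };
}

const div = makeNode("div");
const h3 = makeNode("h3");
const returned = div.appendChild(h3);

console.log(returned === h3);  // true  — you got the child back
console.log(returned === div); // false — not the parent you may have expected
```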
Upvotes: 1 <issue_comment>username_3: Worked like a charm. Thank you both:
```
let brElement = document.createElement("br");
let pickCategory = document.createElement("div");
let h3Element = document.createElement("h3");
let newPick = document.createTextNode("Изберете категория МПС: ");
h3Element.appendChild(newPick);
brElement.appendChild(pickCategory);
pickCategory.appendChild(h3Element);
document.body.appendChild(pickCategory);
```
Upvotes: 0
|
2018/03/18
| 259 | 853 |
<issue_start>username_0: My main menu is aligned to the left, and I want to move the menu items to the center of the page. I'm using the Protostar template.<issue_comment>username_1: Assuming you are using Joomla 3.5 or later, create a file at: `templates/protostar/css/user.css` if it doesn't already exist and add the following contents to the file:
`.navigation {
text-align: center;
}
.navigation .nav {
display: inline-block;
}`
Upvotes: 1 <issue_comment>username_2: If you use the Protostar template, open the file **templates\protostar\css\template.css** and find this block of code:
```
.navigation .nav-pills {
margin-bottom: 0;
}
```
and change it to:
```
.navigation .nav-pills {
margin-bottom: 0;
display: flex;
justify-content: center;
}
```
[As the following image](https://i.stack.imgur.com/VLFMv.png)
Upvotes: 0
|